The last 7 days have been very crazy for me. Last weekend I ended up in Los Angeles for a couple of days and was able to grab some Cuban pastries before hopping back on a plane for Atlanta. Back in Georgia, I felt very much at home when I spotted some familiar local scenery: a guy with a mullet in a Camaro (with a killer “G-Force” bumper sticker), next to a man in a costume in the middle of traffic asking for money. Next was a Tuesday visit to the monthly Atlanta-Plone meeting, where we discussed the upcoming Repoze sprint/visit. On Thursday, we met with Tres and Chris, who happened to write supervisor, and they gave a tremendous talk on WSGI, Repoze, and Deliverance that blew the PyAtl crowd away. One of the more dramatic show-and-tell pieces was a live demonstration of their “theme Trac like Plone” trick: Tres and Chris stole, borrowed, pick your favorite word, the PyAtl Plone 3.0 site and used it to theme a localhost Trac instance. We also saw a great WSGI debugging middleware tool that “leaked” objects in the WSGI stack. WSGI is truly an incredible technology, and I am so excited about it I almost can’t sleep. On Friday, we hunkered down at Georgia Tech and started playing with Repoze a little more. One silly idea that came up after a few beers at lunch was writing the simplest possible WSGI application using the WSGI spec from PEP 333. Using Ian Bicking’s pythonpaste, Tres was able to walk me through setting up the simplest possible WSGI application. We used string substitution and pickle, and gave birth to A******Glue, AGlue for short. AGlue is just a proof of concept with a funny name. If you use virtualenv and pythonpaste, it is quite simple to make a little web application using WSGI.
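Stripped of Paste, templates and pickle, the "simplest possible WSGI application" that PEP 333 describes is nothing but a callable. Here is a minimal sketch (the names here are mine, not AGlue's), exercised directly the way a WSGI server would call it:

```python
# A complete WSGI application per PEP 333: a callable that takes the
# environ dict and a start_response callback, and returns an iterable
# of strings making up the response body.
def simplest_app(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    return ['Hello from the simplest possible WSGI app\n']

# Drive it by hand, the way a server would:
collected = {}

def start_response(status, headers):
    # A real server uses this to begin the HTTP response.
    collected['status'] = status

body = simplest_app({'REQUEST_METHOD': 'GET'}, start_response)
print(collected['status'] + ': ' + ''.join(body))
```

AGlue just layers string-substitution templates and pickle persistence on top of exactly this shape of callable.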
You really only need to create an /etc directory in your virtualenv that includes a .ini file, such as this:

Step 1: Create a .ini file

[server:main]
use = egg:Paste#http
host = 127.0.0.1
port = 8080

[app:aglue]
paste.app_factory = aGlue.app:factory
path = %(here)s/../var

[pipeline:main]
pipeline = egg:Paste#evalerror aglue

Step 2: Next, make a simple model.py like this:

class Book(object):
    """A book object"""
    def __init__(self, ISBN, title, reviewer=None):
        self.ISBN = ISBN
        self.title = title
        self.reviewer = reviewer

Step 3: Finally, make an app.py, or controller:

import os
import pickle

from paste.request import parse_formvars
from model import Book

template = """<html><body>
<p><a href="http://pyatl.org">pyatl.org</a> </p>
"""

row = """
<p>%(title)s
<form method="post"><input type="hidden" name="index" value="%(index)d">
<input type="submit" name="delete" value="delete">
</form>
</p>
"""

epilogue = """
<form method="post">
<input type="text" name="ISBN">
<input type="text" name="title">
<input type="submit" name="submit" value="add">
</form>
</body></html>
"""

def middleFinger(environ, start_response):
    """Why did you use this, punk?
    """
    form = parse_formvars(environ)
    if form:
        if 'submit' in form:
            book = Book(ISBN=form['ISBN'], title=form['title'])
            books.append(book)
            saveList()
        elif 'delete' in form:
            index = int(form['index'])
            del books[index]
            saveList()
        print form
    page = [template]
    for index in range(len(books)):
        book = books[index]
        page.append(row % {'index': index, 'title': book.title})
    page.append(epilogue)
    status = '200 OK'
    response_headers = [('Content-type', 'text/html')]
    start_response(status, response_headers)
    return [''.join(page)]

def saveList():
    file = open('/tmp/persistant.db', 'w')
    pickle.dump(books, file)
    file.close()

def factory(global_config, persist='/tmp/persistant.db', **local_config):
    global books
    books = []
    if not os.path.exists(persist):
        saveList()
    else:
        file = open(persist)
        books = pickle.load(file)
    return middleFinger

With that little bit of code, you get a tiny working book-list app. One thing I learned from the last few days is that Ian Bicking is amazing! Between virtualenv and pythonpaste alone, it is incredible how many tools he creates to help other Python programmers. Tres and Chris are equally amazing, and I would recommend trying to get them to come to your local user group for a Repoze/Deliverance talk too! A video of their talk will be posted on YouTube this week, and I will also upload a more refined version of AGlue to the cheeseshop in a few days.
http://www.oreillynet.com/onlamp/blog/2007/12/repoze_sprint_at_pyatl_aglue_i_1.html
I have the following module, which has one variable containing a string for the first day of a hypothetical year, one method which outputs a string, and another method which also outputs a string:

module Week
  first_day = "Sunday"

  def weeks_in_month
    puts "There are 4 weeks in a month"
  end

  def weeks_in_year
    puts "There are 52 weeks in a year"
  end
end

class Decade
  include Week

  def firstday
    puts Week::first_day
  end
end

z = Decade.new
z.weeks_in_month
z.weeks_in_year
z.firstday

The last call errors with:

undefined method `first_day' for Week:Module (NoMethodError)

When writing a module, the convention is to declare constants like this:

module Week
  FIRST_DAY = 'Sunday'
end

Note that they're in ALL_CAPS. Anything that begins with a capital letter is treated as a constant; lower-case names of that sort are treated as local variables. Generally it's bad form to access the constants of another module directly, as it limits your ability to refactor how those are stored. Instead, define a public accessor method:

module Week
  def first_day
    FIRST_DAY
  end
end

Now you can call that externally:

Week.first_day

Note you can also change how that's implemented:

module Week
  DAYS = %w[ Sunday Monday Tuesday ... Saturday ]

  def first_day
    DAYS.first
  end

  extend self # Makes methods callable like Week.first_day
end

The nice thing about that is the first_day method does exactly the same thing; no other code has to change. This makes refactoring significantly easier. Imagine if you had to track down and replace all those references to Week::FIRST_DAY. There are some other things to note here. The first is that any time you include a module, you get its methods and constants loaded in locally. The second is that when you define a mix-in module, you should be careful with your names to avoid potential conflicts with the target class. Since you've mixed it in, you don't need the namespace prefix; just calling first_day should do it.
https://codedump.io/share/Xt1i4Y9oGYlk/1/modules-and-accessing-variables-from-modules-ruby-language
05 December 2008 12:49 [Source: ICIS news] By Stuart Moir “Recent shutdowns by plastics companies mean that chlorine production is extremely low,” a European producer said. “Over the next half-year the caustic soda market will tighten and prices will rise. We might see many producers switching to soda ash in 2009.” The European soda ash market has so far resisted the effects of the global financial crisis, with stable demand reported and producers currently running their plants at near full capacity. Demand from the detergent industry, which accounts for roughly 10% of European soda ash consumption, shows no sign of decreasing. According to market players, the use of soda ash in the detergent-manufacturing process will rise over the course of 2009. As the chemical industry focuses more on environmentally friendly practices, soda ash seems an eco-friendly alternative to caustic soda in detergent production. Caustic soda is a base chemical that can cause chemical burns and blindness, whereas soda ash is biodegradable and a far less corrosive source of alkalinity. Soda ash is also effective at removing hard ions from water, an important characteristic for a detergent. Although soda ash could seem a logical replacement for caustic soda, there is an obstacle: making the switch could involve a potentially costly structural change for production facilities. “Caustic is added to the detergent process as a liquid,” a major European producer said. “Soda ash is a powder. It has to be transported to and within the site, made into a solution, and once at the necessary level of alkalinity it can then be pumped. You are adding another step, when it might seem easier to use drums of liquid caustic.” However, the switch from caustic to ash is now being considered an economically feasible option, as a growing disparity in price between the two chemicals has developed.
Caustic soda has risen continually against soda ash in 2008, and is currently assessed by global chemical market intelligence service ICIS pricing at $650-660 (€507-515)/DMT (dry metric tonne) FOB (free on board) NWE (northwest Europe). Prices for light soda ash bags are at $315-325/tonne FOB NWE, slightly under half the price of caustic soda. Higher energy costs in 2008 mean that soda ash producers will be looking to raise prices by around €40-50/tonne in 2009. Producers said this would not affect soda ash’s competitive edge, believing that the swing potential was in soda ash’s favour and still worth the hassle of plant alterations. Buyers shared this opinion. “Soda ash is an inexpensive filler to enhance the storage and dissolving properties of detergent. It is safer, more stable, and more cost effective than caustic, and is a good carrier of surfactants and dyes. It’s got a bright future in the industry,” said a buyer. The use of soda ash in other applications such as pulp and paper manufacture, chemical production, flue gas desulphurisation and carbon capture technologies was also increasing, market players said, pointing to a bright outlook for 2009 despite the chemical industry downturn. ($1 = €0.78)
http://www.icis.com/Articles/2008/12/05/9177332/europe-soda-ash-gains-on-detergent-demand.html
There's a video that talks about this very topic, with an example that justifies the debate. Have a look for a better understanding.

Java:

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}

Python:

print "Hello World"

It looks simple and gets your job done. Not too much of a mess in Python. Java is a general-purpose programming language, while Python is a high-level programming language delivering better code readability with a shorter syntax. Java or Python, biggest debate ever, but honestly it totally depends on your requirements. Fields like data analytics and data science use Python or R, not Java. Let's remember one thing: these job descriptions are the ones that are hot in the market, and when I say hot, they have higher demand in most companies, big or small. Moreover, Python is easy to understand and use compared to Java. One of the major drawbacks of Python is its significant whitespace, which puts some people off. I would definitely prefer Python. Hey, I have been working with Python for some time now, and I have worked with Java a little. In the beginning, I started coding with Java for practice and to build simple projects, but once I switched to Python, I have never wanted to go back to using Java again. For me, Python is easier, simpler and more comfortable than Java. One might argue that Java is faster than Python, or mention a few other advantages of Java over Python. But according to me, Python is better. There are several features of Python that I like. If you haven't learned either of them, then going for Python is a better option. It makes more sense, as Python can be applied in multiple places and is most suitable for use with new trending technologies like machine learning, AI and data analytics. Python. Don't even think about selecting another language as your first. Why? Well, Python is easy. Trust me on this one.
My first major language was C++, and it led me to contemplate a career change. Here's a short snippet of C++ code for displaying "Hello world" on the screen:

#include <iostream>
using namespace std;

int main() {
    cout << "Hello world!" << endl;
    return 0;
}

Here's the same thing in Python:

print("Hello world!")

And I think that's about it. Both languages are great at their own thing and at what they're used for, but Python, hands down, is one of the best things a beginner could start with today. Java and Python are both good. Java is more used in web applications and is also secure; Python is more used in machine learning and artificial intelligence. You can easily learn Python and machine learning. Hello. Both of the programming languages are very much essential, and each has its own merits and demerits. It would be best if one could gain knowledge of both. If the time comes when you can choose only one language to start your career with, then I suggest you go with Python. I choose Python because it is very user friendly, has an easy syntax, is the most used programming language at present, and is expected to have great scope in the future. Python has a larger number of libraries compared to Java. If you pursue a course in Python, then you have the opportunity of working on data science projects, which is the trend of today. At present, big companies like Facebook, Instagram and Yahoo use Python. An example program shows why I feel Python is better than Java.

Java program to print hello:

class A {
    public static void main(String args[]) {
        System.out.println("Hello World");
    }
}

Python program to give the output Hello:

# This program prints Hello, world!
print('Hello, world!')

We can see the difference in the level of difficulty in terms of programming. You can visit the Quora answer to know more regarding this topic. Python is a more productive language than Java.
Python is an interpreted language with elegant syntax, which makes it a very good option for scripting and rapid application development in many areas. ... Python code is much shorter, even though some Java “class shell” is not listed.
https://www.edureka.co/community/26464/which-of-them-is-better-between-java-vs-python-and-why?show=40916
I have a Grails application deployed on Tomcat 7. Here is the code fragment which tries to create a new file:

def path = "/var/csvs" + file.fileItem.fileName
def fileInputStream = file.inputStream
File f = new File(path)
if (!f.exists()) {
    f.createNewFile()
}

I get this exception:

Permission denied. Stacktrace follows:
java.io.IOException: Permission denied
    at java.io.File.createNewFile(File.java:1006)

I have already given rwx permission to tomcat7 (the user under which Tomcat is running). So why am I not able to create a new file? Is it that "/var/csvs" is trying to create a file relative to the Tomcat webapps directory? If so, then how should I create a file at /var/csvs (where csvs is a folder I created under /var)?

EDIT: Here are the permissions on the /var/csvs folder:

[email protected]:/# ls -ld /var/csvs
drwxrwxrwx 2 tomcat7 tomcat7 4096 Jun 3 15:44 /var/csvs

So clearly tomcat7 is the owner of that directory and the mode is 777.

Shouldn't the code be like:

def path = "/var/csvs/" + file.fileItem.fileName // notice the trailing fwd slash

Otherwise it is very likely you're trying to write into the /var directory.
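For what it's worth, the bug here is just string concatenation without a path separator, which goes wrong the same way in any language. A quick illustration in Python (the file name is made up):

```python
import os

filename = "report.csv"

# Naked concatenation, as in the Grails snippet above: no separator,
# so the name is glued onto the directory name itself.
bad = "/var/csvs" + filename           # "/var/csvsreport.csv"

# Joining with the platform separator avoids the missing-slash bug.
good = os.path.join("/var/csvs", filename)   # "/var/csvs/report.csv"

print(bad)
print(good)
```

The first path attempts to create a file directly inside /var, which is exactly where the permission error comes from.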
http://databasefaq.com/index.php/answer/42467/tomcat-grails-javaioioexception-permission-denied-while-creating-new-file-from-tomcat-in-grails
import sys
sys.path.append('../code')
from init_mooc_nb import *
init_notebook()

import scipy.sparse.linalg as sl

my = 0.5 * (pauli.sys0 + pauli.sysz)
s0s0sz = np.kron(pauli.s0s0, pauli.sz)
s0szsz = np.kron(pauli.s0sz, pauli.sz)
mys0 = np.kron(my, pauli.s0)
s0s0sx = np.kron(pauli.s0s0, pauli.sx)
s0s0sy = np.kron(pauli.s0s0, pauli.sy)
szsxsz = np.kron(pauli.szsx, pauli.sz)
s0sysz = np.kron(pauli.s0sy, pauli.sz)
sxsxsz = np.kron(pauli.sxsx, pauli.sz)
sysxsz = np.kron(pauli.sysx, pauli.sz)


def make_qshe_sc(l=40, w=10, lead=False):
    # onsite, hopx, hopy, lat and p come from helper code not shown here.
    def shape(pos):
        (x, y) = pos
        return (1.0 * y**2 / l**2 + 1.0 * x**2 / w**2) <= 0.25

    syst = kwant.Builder()
    syst[lat.shape(shape, (0, 0))] = onsite
    syst[kwant.HoppingKind((1, 0), lat)] = hopx
    syst[kwant.HoppingKind((0, 1), lat)] = hopy

    if lead:
        sym = kwant.TranslationalSymmetry((0, 1))
        lead = kwant.Builder(sym)
        lead[lat(0, 0)] = 1.5 * p.B * s0szsz
        lead[kwant.HoppingKind((0, 1), lat)] = -p.B * s0szsz
        syst.attach_lead(lead)
        syst.attach_lead(lead.reversed())

    return syst.finalized()


def make_qshe_sc_ribbon(w=3):
    def ribbon_shape(pos):
        (x, y) = pos
        return (0 <= x < w)

    sym = kwant.TranslationalSymmetry((0, 1))
    syst = kwant.Builder(sym)
    syst[lat.shape(ribbon_shape, (0, 0))] = onsite
    syst[kwant.HoppingKind((1, 0), lat)] = hopx
    syst[kwant.HoppingKind((0, 1), lat)] = hopy
    return syst.finalized()


def make_2d_pwave(w, l):
    def shape(pos):
        (x, y) = pos
        return (1.0 * y**2 / l**2 + 1.0 * x**2 / w**2) <= 0.25

    def hopx(site1, site2, p):
        (x1, y1) = site1.pos
        (x2, y2) = site2.pos
        phi = p.phase(0.5 * (x1 + x2), 0.5 * (y1 + y2))
        return -p.t * pauli.sz + 1j * p.delta * \
            (np.cos(phi) * pauli.sx + np.sin(phi) * pauli.sy)

    def hopy(site1, site2, p):
        (x1, y1) = site1.pos
        (x2, y2) = site2.pos
        phi = p.phase(0.5 * (x1 + x2), 0.5 * (y1 + y2))
        return -p.t * pauli.sz - 1j * p.delta * \
            (np.cos(np.pi / 2 + phi) * pauli.sx + np.sin(np.pi / 2 + phi) * pauli.sy)

    def onsite(site1, p):
        return (4 * p.t - p.mu) * pauli.sz

    lat = kwant.lattice.square()
    syst = kwant.Builder()
    syst[lat.shape(shape, (w / 2 - 1, 0))] = onsite
    syst[kwant.HoppingKind((1, 0), lat)] = hopx
    syst[kwant.HoppingKind((0, 1), lat)] = hopy
    return syst.finalized()


def bhz_slab(l, w, h):
    lat = kwant.lattice.general(np.identity(3))
    syst = kwant.Builder()

    def shape(pos):
        (x, y, z) = pos
        return (0 <= z < h) and (1.0 * y**2 / l**2 + 1.0 * x**2 / w**2) <= 0.25

    def onsite(site, p):
        (x, y, z) = site.pos
        phi = p.phase(x, y)
        return (p.C + 2 * p.D1 + 4 * p.D2) * s0s0sz + (p.M + 2 * p.B1 + 4 * p.B2) * s0szsz + \
            p.delta * (np.cos(phi) * s0s0sx + np.sin(phi) * s0s0sy)

    def hopx(site1, site2, p):
        return -p.D2 * s0s0sz - p.B2 * s0szsz + p.A2 * 0.5j * sxsxsz

    def hopy(site1, site2, p):
        return -p.D2 * s0s0sz - p.B2 * s0szsz + p.A2 * 0.5j * sysxsz

    def hopz(site1, site2, p):
        return -p.D1 * s0s0sz - p.B1 * s0szsz + p.A1 * 0.5j * szsxsz

    syst[lat.shape(shape, (0, 0, 0))] = onsite
    syst[kwant.HoppingKind((1, 0, 0), lat)] = hopx
    syst[kwant.HoppingKind((0, 1, 0), lat)] = hopy
    syst[kwant.HoppingKind((0, 0, 1), lat)] = hopz
    return syst.finalized()


def calc_energies(syst, p, num_orbitals, num_states):
    ham = syst.hamiltonian_submatrix(args=[p], sparse=True).tocsc()
    energies, states = sl.eigsh(ham, sigma=0, k=num_states)
    densities = np.linalg.norm(
        states.reshape(-1, num_orbitals, num_states), axis=1)**2
    return energies, states, densities

MoocVideo("YVGlfejNH90", src_location="7.1-intro")

By now, we have seen examples of how the topological properties of the bulk of a material can give birth to new physical properties at its edges, and how these edge properties cannot exist without a bulk. This is the essence of bulk-edge correspondence. For example, the unpaired Majorana bound states at the edges of a Kitaev chain exist because they are separated by the bulk of the chain. Observe that the systems we have studied so far all had something in common: the topologically protected edge states were separated by a bulk that is one dimension higher than the dimension of the edge states.
For example, the 0D Majorana bound states are separated by the 1D bulk of a Kitaev chain, and 1D chiral edge modes are separated by a 2D Chern insulator. This week, we will see that this does not need to be the case. The dimension of the bulk does not need to be one higher than the dimension of the topologically protected edge; any dimension higher than the dimension of the edge works equally well. We will see how this simple insight opens new avenues in the hunt for topological protection. In the past weeks, we have studied two systems that appear very different, but where topology showed up in a very similar way. First, let's consider the quantum spin-Hall insulator. As we saw two weeks ago, it is characterized by a fermion parity pump: if you take a Corbino disk made out of a quantum spin-Hall insulator and change the flux by half a normal flux quantum, that is by $h/2e$, one unit of fermion parity is transferred from one edge of the sample to the other. Secondly, let us consider a one-dimensional topological superconductor, like we studied in weeks two and three. If such a system is closed into a Josephson ring, and the flux through the ring is advanced by one superconducting flux quantum $h/2e$, the fermion parity at the Josephson junction connecting the two ends changes from even to odd, or vice versa. This is the $4\pi$ Josephson effect, one of the main signatures of topological superconductivity. Note that the change in flux is equal to $h/2e$ in both cases, since a superconducting flux quantum $h/2e$ is half of the normal flux quantum $h/e$. This suggests that once you have a quantum spin-Hall insulator, you are only one small step away from topological superconductivity and Majoranas. The only ingredient that is missing is superconducting pairing on the quantum spin-Hall edge.
But this is easy to add, for instance by putting a superconductor on top of the outer edge of our quantum spin-Hall Corbino disk. The superconductor covers the entire quantum spin-Hall edge except for a small segment, which acts as a Josephson junction with a phase difference given by $\phi = 2e\Phi/\hbar$, where $\Phi$ is the magnetic flux through the center of the disk. We imagine that the superconductor gaps out the helical edge by proximity, which means that Cooper pairs can tunnel in and out of the superconductor into the edge. For this to happen, a conventional $s$-wave superconductor is enough. We will now repeat our pumping experiment, that is, increase the flux $\Phi$ by $h/2e$. We know that one unit of fermion parity must be transferred from the inner edge of the disk to the outer edge. However, the only place where we can now find a zero-energy state is the Josephson junction, because the rest of the edge is gapped. From the point of view of the superconducting junction, this means that when the phase difference $\phi$ advances by $2\pi$, the ground state fermion parity of the junction changes. Recalling what we learned in the second and third weeks, we can say that the Josephson effect is $4\pi$-periodic.

question = ("What happens to the Josephson current in the setup shown above "
            "if you remove the inner edge of the Corbino disk?")

answers = ["The pumping argument fails and the Josephson effect becomes $2\pi$ periodic.",
           "Then you can no longer apply a flux through the disk.",
           "The Josephson effect remains $4\pi$ periodic, but the fermion parity becomes fixed.",
           "Nothing changes if the inner edge of the Corbino disk is removed."]

explanation = ("Josephson current is a local effect, so it cannot be affected by a removal of the inner edge. "
               "When you insert a superconducting flux quantum into the ring, the fermion parity of the edge becomes odd. "
               "The extra fermion comes from the gapped bulk of QSHE, which now acquires one broken Kramers pair. "
               "That is allowed since there is half a normal flux quantum penetrating the bulk, "
               "and Kramers theorem doesn't apply anymore.")

MoocMultipleChoiceAssessment(question=question, answers=answers, correct_answer=3,
                             explanation=explanation)

We know that the $4\pi$-periodicity of the Josephson effect can always be associated with the presence of Majorana zero modes at the two superconducting interfaces of the Josephson junction. However, if you compare the system above with the Josephson ring studied in week three, you will notice an important difference. In that case, the Josephson junction was formed by an insulating barrier. Now, on the other hand, the two superconducting interfaces are connected by the quantum spin-Hall edge. This means that our Majoranas are connected by a gapless system, and therefore always strongly coupled. In order to see unpaired Majoranas, or at least weakly coupled ones, we need to gap out the segment of the edge forming the Josephson junction. To gap it out, we could try to place another superconductor in the junction. Unfortunately, this doesn't really help us, because it results in the formation of two Josephson junctions connected in series, and we only want one. However, we know that the edge modes of the quantum spin-Hall insulator are protected from backscattering by time-reversal symmetry. To gap them out, we need to break time-reversal symmetry. Since a magnetic field breaks time-reversal symmetry, we can gap out the edge modes by placing a magnet on the segment of the edge between the two superconductors. In the sketch above, you see two Majoranas drawn, one at each interface between the magnet and the superconductor. Their wavefunctions decay as we move away from the interfaces. As Carlo Beenakker mentioned in the introductory video, these Majoranas are quite similar to those we found at the ends of quantum wires.
To understand them in more detail, note that the magnet and the superconductor both introduce a gap in the helical edge, but through completely different physical mechanisms. The magnet flips the spin of an incoming electron, or hole, while the superconductor turns an incoming electron with spin up into an outgoing hole with spin down. These two different types of reflection processes combine to form a Majorana bound state. We can capture this behavior with the following Bogoliubov-de Gennes Hamiltonian for the edge: $$H_\textrm{BdG}=(-iv\sigma_x \partial_x-\mu)\tau_z+m(x)\,\sigma_z+\Delta(x)\,\tau_x.$$ The first term is the edge Hamiltonian of the quantum spin-Hall effect, describing spin up and down electrons moving in opposite directions, together with a chemical potential $\mu$. The matrix $\tau_z$ acts on the particle-hole degrees of freedom, doubling the normal state Hamiltonian as usual. The second term is the Zeeman term due to the presence of the magnet. Finally, the last term is the superconducting pairing. The strength of the Zeeman field $m(x)$ and the pairing $\Delta(x)$ both depend on position. At a domain wall between the superconductor and the magnet, where the relevant gap for the edge changes between $m$ and $\Delta$, the Hamiltonian above yields a Majorana mode. This is shown below in a numerical simulation of a quantum spin-Hall disk. The left panel shows the edge state of the disk without any superconductor or magnet.
In the right panel, we cover one half of the disk with a superconductor and the other with a magnet, and obtain two well-separated Majoranas:

l = 60
w = 60
sys2 = make_qshe_sc(l, w)

p = SimpleNamespace(A=0.5, B=1.00, D=0.1, M=0.5)

p.gaps = lambda x, y: [(y < 0) * 0.0, (y >= 0) * 0.0]
energies0, states0, densities0 = calc_energies(sys2, p, num_orbitals=8, num_states=10)

p.gaps = lambda x, y: [(y < 0) * 0.2, (y >= 0) * 0.3]
energies, states, densities = calc_energies(sys2, p, num_orbitals=8, num_states=10)

phi = np.linspace(-np.pi, np.pi, 51)
x = (w + 0.5) / 2 * np.cos(phi)
y = (l + 0.5) / 2 * np.sin(phi)

fig = plt.figure(figsize=(9, 3.5))

ax1 = fig.add_subplot(122)
gap_B = ax1.fill_between(x[:26], 0, y[:26], facecolor='gold', alpha=0.1)
gap_Sc = ax1.fill_between(x[26:], 0, y[26:], facecolor='blue', alpha=0.1)
kwant.plotter.map(sys2, densities[:, 0], colorbar=False, ax=ax1, cmap='gist_heat_r')
plt.plot(x, y, 'k-', lw=2)
text_style = dict(fontsize=16,
                  arrowprops=dict(arrowstyle="-", facecolor='black', lw=0))
plt.annotate('$E_Z$', xytext=(-w/20, l/5), xy=(0, l/3), **text_style)
plt.annotate('$\Delta$', xytext=(-w/20, -l/4), xy=(0, -l/3), **text_style)
ax1.set_yticks([])
ax1.set_xticks([])
ax1.set_ylim(-l/2-3, l/2+3)
ax1.set_xlim(-w/2-3, w/2+3)
pot = np.log(abs(energies0[0])) // np.log(10.0) - 1
fac = abs(energies0[0]) * 10**(-pot)
ax1.set_title('Majoranas, $E = $' + scientific_number(abs(energies[0])))

ax0 = fig.add_subplot(121)
kwant.plotter.map(sys2, densities0[:, 0], colorbar=False, ax=ax0, cmap='gist_heat_r')
ax0.set_yticks([])
ax0.set_xticks([])
ax0.set_ylim(-l/2-3, l/2+3)
ax0.set_xlim(-w/2-3, w/2+3)
ax0.set_title('Edge state, $E = $' + scientific_number(abs(energies0[0])))
plt.plot(x, y, 'k-', lw=2)
plt.show()

The density-of-states plot of the lowest energy state reveals one Majorana mode at each of the two interfaces between the magnet and the superconductor.
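A quick way to convince yourself that the interface must bind a state is to square the edge Hamiltonian, taking $\mu=0$ for simplicity. The kinetic term $-iv\sigma_x\partial_x\tau_z$ anticommutes with both mass terms, while $\sigma_z$ and $\tau_x$ commute with each other, so$$H_\textrm{BdG}^2=v^2k^2+m^2+\Delta^2+2m\Delta\,\sigma_z\tau_x=v^2k^2+(m\pm\Delta)^2,$$with $\pm$ the eigenvalues of $\sigma_z\tau_x$. In the sector with eigenvalue $-1$, the effective mass is $m-\Delta$: it is positive under the magnet ($m>0$, $\Delta=0$) and negative under the superconductor ($m=0$, $\Delta>0$), so it changes sign at each interface. A sign-changing mass in a 1D Dirac equation binds a single zero mode, by the standard Jackiw-Rebbi argument, and this is exactly the Majorana seen in the simulation.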
This clearly shows how it is possible to obtain 0D topologically protected states (the Majorana modes) from a 2D bulk topological phase (the quantum spin-Hall insulator). All we had to do was add the appropriate ingredients (the superconductor and the magnet). Let us now move on to Majoranas in vortices, as discussed by Carlo Beenakker in the introductory video. We will need a model for a 2D topological superconductor. How do we obtain it? It turns out that the method we used to construct 2D Chern insulators in week 4, namely stacking 1D Kitaev chains and coupling them, can also be used to construct 2D topological superconductors. That isn't very surprising though, is it? Remember that back then, we told you to forget that the Kitaev model was really a superconductor. Bearing that in mind, it comes as no surprise that stacking 1D superconductors gives us a 2D superconductor. So let's look back at the Hamiltonian we obtained for a Chern insulator by coupling a stack of Kitaev chains:$$H_\textrm{2D}(\mathbf{k})=-(2t\cos{k_x}+\mu)\,\tau_z+\Delta\sin{k_x}\tau_y-2\gamma\,(\cos{k_y}\tau_z+\sin{k_y}\,\tau_x).$$ Those of us who are careful would want to check that the above Hamiltonian is indeed a superconductor, in particular that the terms coupling different chains do not spoil particle-hole symmetry. And indeed, if we consider the operator $\mathcal{P}=\tau_x \mathcal{K}$, with $\mathcal{K}$ the complex conjugation operator, we find that the Bloch Hamiltonian obeys $H_\textrm{2D}(\mathbf{k}) = -\tau_x H^*_\textrm{2D}(-\mathbf{k}) \tau_x$, precisely the symmetry obeyed by the Kitaev chain, extended to two dimensions (if you do not remember how to apply an anti-unitary symmetry in momentum space, you can go back to week 1 and look at the original derivation). The Hamiltonian above is quite anisotropic: it looks different in the $x$ and $y$ directions, a consequence of the way we derived it in week four.
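The particle-hole symmetry is also easy to verify numerically. Below is a minimal check (using plain numpy rather than the course's pauli helpers; the parameter values are arbitrary):

```python
import numpy as np

# Pauli matrices acting in particle-hole (tau) space.
tx = np.array([[0, 1], [1, 0]], dtype=complex)
ty = np.array([[0, -1j], [1j, 0]])
tz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_2d(kx, ky, t=1.0, mu=0.3, delta=0.8, gamma=0.5):
    """Bloch Hamiltonian of the coupled stack of Kitaev chains."""
    return (-(2 * t * np.cos(kx) + mu) * tz
            + delta * np.sin(kx) * ty
            - 2 * gamma * (np.cos(ky) * tz + np.sin(ky) * tx))

# Check H(k) = -tau_x H*(-k) tau_x on a handful of random momenta.
rng = np.random.default_rng(0)
for kx, ky in rng.uniform(-np.pi, np.pi, size=(20, 2)):
    lhs = H_2d(kx, ky)
    rhs = -tx @ H_2d(-kx, -ky).conj() @ tx
    assert np.allclose(lhs, rhs)
print("particle-hole symmetry holds")
```

The $\cos$ terms are even and the $\sin$ terms odd in $\mathbf{k}$, which is exactly what the antiunitary $\mathcal{P}=\tau_x\mathcal{K}$ requires, so the interchain couplings pass the test.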
For our purposes, however, it is convenient to make it look isotropic. Thus, we tweak the coefficients in $H$ to make it look similar in the $x$ and $y$ directions. This is fine as long as we do not close the gap, because the topological properties of $H$ remain unchanged. In this way we arrive at the canonical Hamiltonian of a so-called $p$-wave superconductor:$$H(k_x,k_y)=-[2t\,(\cos{k_x}+\cos{k_y})+\mu]\,\tau_z+\Delta\,(\sin{k_x}\tau_y-\sin{k_y}\tau_x).$$ Apart from looking more symmetric between the $x$ and $y$ directions, the Hamiltonian clearly separates normal hopping, which is proportional to $t$, and superconducting pairing, which is proportional to $\Delta$. This superconductor is $p$-wave because the pairing is linear in momentum, just like in the Kitaev chain. This can be seen explicitly by expanding $H$ around $\mathbf{k}=\mathbf{0}$, which gives$$H(k_x,k_y)\approx [t\,(k_x^2+k_y^2)-\mu-4 t]\tau_z-[i \Delta(k_x-i k_y)\tau_++\textrm{h.c.}],$$ where $\tau_+=(\tau_x+i\tau_y)/2$. Note that the pairing is proportional to $k_x-ik_y$, a chiral $p$-wave combination, and it breaks both time-reversal and inversion symmetries. Even though we have reinterpreted the Hamiltonian $H$ as a superconductor, it is still originally a Chern insulator. This means that the system is still characterized by a bulk Chern number, which determines the presence of chiral edge states. A chiral edge state can be described by a simple effective Hamiltonian, equivalent to that of a quantum Hall edge:$$H_\textrm{edge}=\hbar v k,$$ with $v$ the velocity and $k$ the momentum along the edge. Note that the edge Hamiltonian maintains the particle-hole symmetry of the bulk: for every state with energy $E$ and momentum $k$, there is a state with energy $-E$ and momentum $-k$. We are now ready to see how unpaired Majoranas can appear in a 2D $p$-wave superconductor. So far we have considered a uniform superconducting pairing $\Delta$, with constant amplitude and phase.
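A quick numerical aside (again plain numpy, with arbitrary parameter values): scanning the bulk gap of this Hamiltonian over the Brillouin zone locates the gap closing at $\mathbf{k}=0$ for $\mu=-4t$, the topological transition:

```python
import numpy as np

def bulk_gap(mu, t=1.0, delta=1.0, n=201):
    """Minimum quasiparticle energy |E(k)| of the p-wave Hamiltonian
    over an n x n grid of the Brillouin zone."""
    k = np.linspace(-np.pi, np.pi, n)
    kx, ky = np.meshgrid(k, k)
    hz = -(2 * t * (np.cos(kx) + np.cos(ky)) + mu)  # tau_z coefficient
    hy = delta * np.sin(kx)                         # tau_y coefficient
    hx = -delta * np.sin(ky)                        # tau_x coefficient
    E = np.sqrt(hx**2 + hy**2 + hz**2)              # the +/-E bands
    return E.min()

print(bulk_gap(-4.0))   # essentially zero: gap closes at k = 0
print(bulk_gap(-1.0))   # a finite gap away from the transition
```

The same scan finds another closing at $\mu=+4t$, where the gap collapses at $\mathbf{k}=(\pi,\pi)$ instead. All of this still assumes a spatially uniform pairing $\Delta$.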
This is an idealized situation, which corresponds to a perfect superconductor with no defects. If you apply a small magnetic field to a superconducting film, or if there are defects in the material, a vortex of supercurrent can form to lower the free energy of the system. In a vortex, there is a supercurrent circulating in a small area around the defect or the magnetic field lines penetrating the superconductor. The magnetic flux enclosed by the vortex supercurrent is equal to a superconducting flux quantum $h/2e$. The amplitude $\Delta$ of the superconducting pairing is suppressed in the core of the vortex, going to zero in its center, and the superconducting phase winds by $2\pi$ around a closed path surrounding it. The situation is sketched below: Because the pairing $\Delta$ goes to zero in the middle of the vortex, there can be states with an energy smaller than $\Delta$ which are localized at the vortex core. We now want to see whether it is possible to have a non-degenerate zero energy solution in the vortex - because of particle-hole symmetry, this would be an unpaired Majorana mode! To compute the spectrum of the vortex we could introduce a position-dependent phase for $\Delta$ in the Hamiltonian of the superconductor, and solve it for the energy spectrum by going through quite some algebra. But as usual in this course, we will take a shortcut. Our shortcut comes from answering the following question: how is the spectrum of the chiral edge states affected by introducing a vortex in the middle of the superconductor? From week one, we know that changing the flux through a superconducting ring by a flux quantum changes the boundary condition from periodic to antiperiodic, or vice versa. A vortex has precisely the same effect on the chiral edge states. Therefore, in the presence of a vortex, the allowed values $k_n$ of momentum in a disk shift by $\pi/L$, with $L$ the length of the edge.
The energy levels depend linearly on momentum and are shifted accordingly,$$ E_n\,\to\, E_n + \hbar v \pi / L. $$ Now, with or without the vortex, the spectrum must be symmetric around $E=0$ because of particle-hole symmetry. The energy levels $E_n$ correspond to standing waves and are equally spaced, with spacing given by $2\hbar v \pi / L$. There are only two such spectra consistent with particle-hole symmetry, $E_n = 2\pi\,n\, \hbar v / L$ and $E_n = 2\pi\,(n+1/2)\, \hbar v / L$. Which one of the two spectra corresponds to the presence of a vortex? To answer this question, observe that the energy spectrum $E_n = 2 \pi\,n\,\hbar v / L$ includes a zero-energy solution, which is an unpaired Majorana mode at the edge! This is impossible unless there is a second zero-energy solution somewhere else. And the only other possible place where we could have a zero-energy solution is the core of the vortex. Just by looking at the edge state momentum quantization, we have thus demonstrated that a vortex in a $p$-wave superconductor must come with a Majorana. Below, we plot the wave function of the lowest energy state in a $p$-wave disk with a vortex in the middle. The lowest energy wavefunction is an equal superposition of the two Majorana modes. Here you can see that half of it is localized close to the vortex core and half of it close to the edge.
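The level-counting argument can be made concrete in a few lines. The sketch below (illustrative units; not from the notebook) writes out the two particle-hole-symmetric ladders and checks that the vortex shift of $\hbar v \pi/L$ maps one onto the other, with only one of the two containing a zero-energy level:

```python
import numpy as np

hbar_v, L = 1.0, 10.0                 # illustrative units
spacing = 2 * np.pi * hbar_v / L      # level spacing of the chiral edge states
n = np.arange(-4, 5)

ladder_a = spacing * n                # E_n = 2*pi*n*hbar*v/L        (contains E = 0)
ladder_b = spacing * (n + 0.5)        # E_n = 2*pi*(n+1/2)*hbar*v/L  (does not)

# A vortex shifts every level by hbar*v*pi/L = spacing/2,
# turning one ladder into the other.
assert np.allclose(ladder_b, ladder_a + spacing / 2)
assert np.any(np.isclose(ladder_a, 0)) and not np.any(np.isclose(ladder_b, 0))
```

Only the integer-spaced ladder hosts the edge zero mode, which is the spectrum the text pairs with the presence of a vortex (and hence a second Majorana in the vortex core).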
import matplotlib.cm
import matplotlib.colors as mcolors

colors = matplotlib.cm.gist_heat_r(np.linspace(0, 1, 128)**.25)
gist_heat_r_rescaled = mcolors.LinearSegmentedColormap.from_list('gist_heat_r_rescaled', colors)

p = SimpleNamespace(t=1.0, mu=0.4, delta=0.5, phase=lambda x, y: np.angle(x + 1j*y))
l = 60
w = 60
syst = make_2d_pwave(w, l)
energies, states, densities = calc_energies(syst, p, num_orbitals=2, num_states=10)
kwant.plotter.map(syst, densities[:, 0], cmap=gist_heat_r_rescaled, show=False, colorbar=False)
plt.show()

The wave function is not zero in the bulk between the edge and the vortex because of the relatively small size of the system. The separation between edge and vortex, or between different vortices, plays the same role as the finite length of a Kitaev chain, i.e. it splits the Majorana modes away from zero energy by an exponentially small amount.

question = ("What happens if you add a second vortex to the superconductor? "
            "Imagine that the vortices and edge are all very far away from each other.")
answers = ["The second vortex has no Majorana.",
           "Both vortices have a Majorana, and the edge has two Majoranas.",
           "The Majorana mode at the edge goes away, and each vortex has its own Majorana.",
           "Vortices can only be added in pairs because Majoranas only come in pairs."]
explanation = ("The energy spectrum of the edge is shifted by $\hbar v \pi/L$ by the addition of a second vortex, "
               "so the edge has no Majoranas now. The first vortex is not affected, and we know that it has a Majorana. "
               "And so, of course, the second vortex must have a Majorana as well.")
MoocMultipleChoiceAssessment(question=question, answers=answers, correct_answer=2, explanation=explanation)

Unfortunately, superconductors with $p$-wave pairing are very rare, with mainly one material being a good candidate. But instead of waiting for nature to help us, we can try to be ingenious.
As Carlo mentioned, Fu and Kane realized that one could obtain an effective $p$-wave superconductor and Majoranas on the surface of a 3D TI. We already know how to make Majoranas with a 2D topological insulator. Let us now consider an interface between a magnet and a superconductor on the surface of a 3D topological insulator. Since the surface of the 3D TI is two dimensional, such an interface will be a one dimensional structure and not a point defect as in the quantum spin-Hall case. The Hamiltonian of the surface is a very simple extension of the edge Hamiltonian, $\sigma_x k_x + \sigma_y k_y$ instead of just $\sigma_x k_x$. We can imagine that $k_y$ is the momentum along the interface between the magnet and the superconductor, and it is conserved. The effective Bogoliubov-de Gennes Hamiltonian is$$H_\textrm{BdG}=(-i\sigma_x \partial_x+ \sigma_y k_y-\mu)\tau_z+m(x)\,\sigma_z+\Delta(x) \tau_x.$$ What is the dispersion $E(k_y)$ of states along the interface resulting from this Hamiltonian? Well, for $k_y=0$ we have exactly the Hamiltonian of the magnet/superconductor interface in the quantum spin-Hall case, which had a zero mode. So we know that the interface is gapless. The magnet breaks time-reversal symmetry, so we will have a chiral edge state, with energy $E$ proportional to $k_y$. Just like in the $p$-wave superconductor case! At this point, analyzing the case of a vortex is very simple. We just have to reproduce the geometry we analyzed before. That is, we imagine an $s$-wave superconductor disk with a vortex in the middle, surrounded by a magnetic insulator, all on the surface of a 3D topological insulator: The introduction of a vortex changes the boundary conditions for the momentum at the edge, like in the $p$-wave case, and thus affects the spectrum of the chiral edge states going around the disk. Following the same argument as in the $p$-wave case, particle-hole symmetry dictates that there is a Majorana mode in the vortex core on a 3D TI. 
Interestingly, the vortex core is spatially separated from the magnet - so the vortex should contain a Majorana mode irrespective of the magnet that was used to create the chiral edge mode. In fact, the magnet was only a crutch that we used to make our argument. We can now throw it away and consider a vortex in a superconductor which covers the entire surface of the topological insulator. To confirm this conclusion, below we show the result of a simulation of a 3D BHZ model in a cube geometry, with a vortex line passing through the middle of the cube. To make things simple, we have added superconductivity everywhere in the cube, and not just on the surface (nothing prevents us from doing this, even though in real life materials like Bi$_2$Te$_3$ are not naturally superconducting).

import matplotlib.cm
import matplotlib.colors as mcolors

colors = matplotlib.cm.gist_heat_r(np.linspace(0, 1, 128))
colors[:, 3] = np.linspace(0, 1, 128)
gist_heat_r_transparent = mcolors.LinearSegmentedColormap.from_list('gist_heat_r_transparent', colors)

l, w, h = 10, 10, 25
syst = bhz_slab(l, w, h)
p = SimpleNamespace(A1=0.5, A2=0.5, B1=0.5, B2=0.5, C=-0.2, D1=0.1, D2=0.0,
                    M=-0.2, delta=0.15, phase=lambda x, y: np.angle(x + 1j * y))
energies, states, densities = calc_energies(syst, p, num_orbitals=8, num_states=10)

fig = plt.figure(figsize=(9, 3.5))
ax0 = fig.add_subplot(121, projection='3d')
kwant.plot(syst, ax=ax0, site_size=0.3)
ax0.set_xlim(-w/2-2, w/2+2)
ax0.set_ylim(-l/2-2, l/2+2)
ax0.set_yticks([])
ax0.set_xticks([])
ax0.set_zlim3d([0, h])
ax0.set_zticks([0, h])
ax0.set_zticklabels(['$0$', '$%d$' % h])

densities /= np.max(densities, axis=0, keepdims=True)
ax1 = fig.add_subplot(122, projection='3d')
kwant.plotter.plot(syst, site_color=densities[:, 0], ax=ax1, cmap=gist_heat_r_transparent, colorbar=False, site_lw=0)
ax1.set_xlim(-w/2-2, w/2+2)
ax1.set_ylim(-l/2-2, l/2+2)
ax1.set_yticks([])
ax1.set_xticks([])
ax1.set_zlim3d([0, h])
ax1.set_zticks([0, h])
ax1.set_zticklabels(['$0$', '$%d$' % h])
plt.show()

In the right panel, you can see a plot of the wavefunction of the lowest energy state. You see that it is very well localized at the end points of the vortex line passing through the cube. These are precisely the two Majorana modes that Carlo Beenakker explained at the end of his introductory video.

MoocVideo("B7lMz-NrKec", src_location="7.1-summary")

Questions about what you just learned? Ask them below!

MoocDiscussion("Questions", "Majoranas in topological insulators")
Please help me in solving this question.

Task 1. Create a class to store details of a student, such as rollno, name, course joined, and fee paid so far. Assume courses are C# and ASP.NET with course fees being 2000 and 3000, respectively. (3 marks)
- Provide a constructor that takes rollno, name and course.
- Provide the following methods:
  - Payment(amount): feepaid += amount;
  - Print(): to print rollno, name, course and feepaid
  - Due(): amount due if the student pays only 1000 as a first payment (TotalFee -= feepaid)
- Declare an object S and call the above methods using Student s = new Student(1, "John", "c#");

Task 2. Complete the program below by adding a class Customer that uses overloaded constructors:
A. Customer(string firstName, string lastName)
B. public Customer(string firstName)

using System;

namespace CustomerApp
{
    public class Customer
    {
        // here you need to add class members (instance variables, constructors and methods)
    }
}

Here is the program where you test the Customer class.

using System;

namespace CustomerApp
{
    class Program
    {
        static void Main(string[] args)
        {
            Customer customer1 = new Customer("Joe", "Black");
            Customer customer2 = new Customer("Jim");
            Console.WriteLine("{0} {1}", customer1.FirstName, customer1.LastName);
            Console.WriteLine("{0} {1}", customer2.FirstName, customer2.LastName);
            Console.ReadLine();
        }
    }
}
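A minimal sketch of Task 1's structure (shown in Python for illustration; the assignment itself asks for C#, so only the shape carries over; the fee table and member names are taken from the task description):

```python
COURSE_FEES = {"c#": 2000, "asp.net": 3000}   # fees given in the problem statement

class Student:
    def __init__(self, rollno, name, course):
        self.rollno = rollno
        self.name = name
        self.course = course
        self.feepaid = 0

    def payment(self, amount):
        self.feepaid += amount

    def due(self):
        # remaining balance: total course fee minus fee paid so far
        return COURSE_FEES[self.course.lower()] - self.feepaid

    def print_details(self):
        print(self.rollno, self.name, self.course, self.feepaid)

s = Student(1, "John", "c#")
s.payment(1000)          # first payment of 1000 leaves 1000 due on the 2000 C# fee
assert s.due() == 1000
```

Translating this to C# means making the fields private with a public constructor `Student(int rollno, string name, string course)` and the `Payment`/`Print` methods the task names.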
This is the editorial for the Unofficial Div 4 Round #2 by ssense and SlavicG. We hope everyone had fun and enjoyed the contest! Video Editorial by HealthyUG

Problem A — Catching the Impostor.

Solution using frequency array: Initialize an array of length $$$n$$$ with all the elements 0. This will be our frequency array. For every element we have in our list, we increase its frequency by one in the frequency array, $$$freq[arr[i]]+=1$$$. After we do this, we iterate through the frequency array and see how many elements have a frequency higher than 0 (it means they were mentioned in the list), let this number be $$$cnt$$$. Now if $$$cnt = n-1$$$ then we know for sure who the impostor is, because all the other people are mentioned in the list. If $$$cnt ≠ n-1$$$ then we can't know for sure who the impostor is, so the answer is no.

Solution using set: We introduce all the elements of the list into a set. A property of a set is that it doesn't contain multiple elements of the same value. We just need to see how big the size of the set is (how many distinct people are on the list). If this size is equal to $$$n-1$$$ then the answer is "YES", else "NO".

#include "bits/stdc++.h"
using namespace std;

int main() {
    set<int> s;
    int n, m;
    cin >> n >> m;
    for(int i = 0; i < m; i++) {
        int x;
        cin >> x;
        s.insert(x);
    }
    if(s.size() == n - 1) {
        cout << "YES";
    } else {
        cout << "NO";
    }
}

Problem B — Rabbit Game

We iterate from the start and see how many carrots the first rabbit can eat, let this be $$$cnt_1$$$. We iterate from the end to see how many carrots the second rabbit can eat, let this be $$$cnt_2$$$. If $$$cnt_1 + cnt_2 > n$$$ then some carrots would be eaten by both rabbits, but this is not possible because they stop whenever they meet. The answer is $$$min(n, cnt_1+cnt_2)$$$.
#include "bits/stdc++.h"
using namespace std;

int main() {
    int n;
    cin >> n;
    int a[n];
    for(int i = 0; i < n; i++) cin >> a[i];
    int Rabbit1 = 1;
    for(int i = 1; i < n; i++) {
        if(a[i] >= a[i-1]) {
            Rabbit1++;
        } else {
            break;
        }
    }
    int Rabbit2 = 1;
    for(int i = n-2; i >= 0; i--) {
        if(a[i] >= a[i+1]) {
            Rabbit2++;
        } else {
            break;
        }
    }
    cout << min(n, Rabbit2 + Rabbit1);
}

We can find the answer by looking at the cases depending on $$$n \bmod 4$$$, or we can use a formula. In either case, below is proof of why it works. Let's solve this problem for a more general case of a rectangular grid $$$n \times m$$$, with rows and columns counted starting from $$$1$$$. Let even cell denote a cell with both even coordinates, all shown on the right drawing. Consider $$$X = \left \lfloor{ \frac{n}{2} }\right \rfloor \cdot \left \lfloor{ \frac{m}{2} }\right \rfloor$$$ squares with even bottom-right corners, as shown on the left drawing. Since the squares are disjoint and Bob colors only one cell per move, he can block Alice from at most one of those $$$X$$$ squares per turn. Alice starts, so she can color $$$\left \lceil{ \frac{X}{2} }\right \rceil$$$ of the $$$X$$$ squares and thus guarantee the score of $$$4 \cdot \left \lceil{ \frac{X}{2} }\right \rceil$$$, no matter what Bob does. There are $$$X$$$ even cells. Alice covers exactly one of them per move because any possible $$$2 \times 2$$$ square contains exactly one even cell. If Bob colours an even cell every time too, they will spend $$$X$$$ moves in total to block all even cells, and then Alice won't be able to move. This way Bob can guarantee that Alice makes at most $$$\left \lceil{ \frac{X}{2} }\right \rceil$$$ moves and gets the score of $$$4 \cdot \left \lceil{ \frac{X}{2} }\right \rceil$$$. Alice can make her score to be at least $$$4 \cdot \left \lceil{ \frac{X}{2} }\right \rceil$$$ and Bob can make Alice's score to be at most $$$4 \cdot \left \lceil{ \frac{X}{2} }\right \rceil$$$.
Her score is thus exactly $$$A = 4 \cdot \left \lceil{ \frac{X}{2} }\right \rceil$$$. Bob's score must be $$$B = n \cdot m - A$$$ because he can always finish the whole grid. We can compare $$$A$$$ with $$$B$$$ to get the answer. Alternatively, you can consider cases depending on $$$n \bmod 4$$$ and $$$m \bmod 4$$$. For example, if $$$n = m = 4 \cdot k + 1$$$ then $$$X = 2k \cdot 2k = 4 \cdot k^2$$$ so $$$A = 4 \cdot (2 \cdot k^2) = 8 \cdot k^2$$$. That's smaller than half of the full grid area $$$(4 \cdot k + 1)^2$$$ so Bob wins. (We don't claim anything about the optimal moves of Alice and Bob. In particular, Alice doesn't have to choose exactly those squares marked on the drawing.)

#include "bits/stdc++.h"
using namespace std;

int main() {
    int t;
    cin >> t;
    while(t--) {
        int n;
        cin >> n;
        if(n % 4 == 2) {
            cout << "Alice\n";
        } else if(n % 2 == 0) {
            cout << "Draw\n";
        } else {
            cout << "Bob\n";
        }
    }
}

#include "bits/stdc++.h"
using namespace std;

int main() {
    long long n;
    cin >> n;
    long long cover = ((n / 2) * (n / 2) + 1) / 2;
    long long as = 4 * cover;
    long long bs = n * n - as;
    if (as == bs) cout << "Draw\n";
    else if (as < bs) cout << "Bob\n";
    else cout << "Alice\n";
}
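As a cross-check (not part of the original editorial), the closed-form score from the proof and the $$$n \bmod 4$$$ case analysis used in the first solution can be compared by brute force over square grids:

```python
def winner(n):
    # closed form from the proof, for a square n x n grid
    X = (n // 2) * (n // 2)        # disjoint 2x2 squares with even bottom-right corners
    alice = 4 * ((X + 1) // 2)     # Alice claims ceil(X/2) of them, 4 cells each
    bob = n * n - alice            # Bob can always finish the rest of the grid
    if alice == bob:
        return "Draw"
    return "Alice" if alice > bob else "Bob"

def winner_cases(n):
    # the n % 4 case analysis used in the first C++ solution
    if n % 4 == 2:
        return "Alice"
    if n % 2 == 0:
        return "Draw"
    return "Bob"

assert all(winner(n) == winner_cases(n) for n in range(1, 500))
```

The sweep confirms that the two formulations agree for every square grid size.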
User talk:Rainchild

From Uncyclopedia, the content-free encyclopedia

Welcome!

Hello, Rainchild! 09:31, February 3, 2012 (UTC)

Your happy monkey topic

Your topic my dear Rainchild is: Saturday (day of the week).
- Write the article on your namespace
- I'll be judging the articles based on creativity, originality and cleverness.
- Good luck, you have until tomorrow night (23.59 UTC, 6:59PM ET) to finish. (happy monkey just ate your shoe laces and liked them too much) --ShabiDOO 11:48, February 11, 2012 (UTC)

Happy monkey

Hey Rainchild how is it going? Nice start to your article. I've formatted your article a little bit, as this is your first contest and you ended up with a rather challenging topic to say the least. I've added a generic image and put some links into the article. If you can...try to have a link in at least each paragraph. I find it best to link them to good and or funny articles. I'm going to give you an extra 6 hours or so to format your article (add images and links to your page so that you get the fair judging that you should get). Feel free to make more of what I've added, remove what I added or move around what I added, I won't be the least bit offended if you reverse all of my edits, and you should probably change my edits to reflect how you see your article in terms of images and links if you don't like them. :38, February 12, 2012 (UTC)

It's Saturday, Saturday, gotta get down on Saturday

But which seat can I take??? lol Thanks for writing in the competition, I just judged your article and it is awesome! If you had some good pics in there and if it had been a little longer, you were shooting for a 9/10! I see great funniness potential in you, I hope you stick around, and by order of the cabal you have been nominated for Noob of the Month! Congrats, and if you have any question or just want to stalk me, don't be shy!
Mattsnow 17:28, February 14, 2012 (UTC)

Thanks and Noobie Questions

Many thanks to Matt and Shabidoo for their helpful words. (I'm not kidding.) Alas, I don't know a) how to add an image to an article and b) where to find images that are guaranteed out of copyright. Am thinking about writing an article called "Moose Nasal Mucous: The New Caviar." Actually, I'm not. I'll try to write something more tasteful, honest! Will try to keep up. --Rainchild
- Others will come by to answer your questions better. Pics are easy to add, just follow the code that you see when you edit. Practice with alternating between "thumb" and "frame". If you want it on the left you put a left on it, in the center add center, but if you want it on the right nothing is needed, it goes there automatically as long as you have a thumb or frame written in. Copyright doesn't make a difference here, satirical fair use covers that unless someone holding the copyright complains loudly enough. Pics can be found in most categories, and categories that say Image have lots more. Such as this or this. Pics can also be found on "Special Pages" go to "Unused pics" or something like that. 'nuff data for now, but practice makes purefect. Aleister 19:25 16-2-'12
- p.s. Then there's this place for other free images.
- Yep, click on "Upload file" on the left of the screen, then select a pic you have in your computer, the pic will be uploaded here. Then just go to any article that has a pic in it and copy what you see when you are on the screen where you "edit" (write), as you can see right here with the pic on the left. I hope that helps :D, the best way to learn how to do something is to find an article that has what you want to do in it, then copy it for your own article. Gotta go punch the fuck out of a mammoth, later! Mattsnow 00:30, February 17, 2012 (UTC)
Leo Sutic wrote:
> this is my understanding of the sort-of consensus we have arrived at.
> Some issues are still being debated, and I believe I have marked those
> as such.

<snip/>

> We've agreed on removing release() from the CM interface, and the rest
> of this summary details how we get the same functionality without
> that method.
>
> We've agreed that all components returned from a CM are thread safe.

I must have missed this (in my desire not to read any emails containing
"ROLE" + hint vs. "ROLE/hint" :)... so I'm going to ask: why must they
be thread safe?

As I've said before it is possible for the container/CM/pool to use the
VM's GC to detect when a client has lost all references to a component.
This allows the component to be returned to a pool (recycled), or be
disposed automatically once the client loses all references to it. If
the component is only taking up memory this is not an issue since it is
exactly when the memory starts to run out that the GC will release
(recycle or dispose of) the component.

The problems that were being brought up dealt with the component holding
onto scarce resources (e.g. filehandles and database connections) which
I talked about in my last email.

More...

> Borrowing from Robert Mouat's mail:
>
> transaction:
>
> this is the period during which a component can be expected to hold
> a [non-memory] resource. This depends a lot on the interface, and
> I'm going to divide these into 3 categories:
>
> 1. The interface defines no transaction. e.g. those that can be
> implemented in a thread-safe manner. e.g.
>
> interface DocumentStore
> {
>   public Document getDocument( String ref );
>   public void putDocument( String ref, Document doc );
> }
>
> 2. The interface has its own transaction delineators.
> e.g. open/close, or begin/end. These clearly define when the
> transaction begins and ends, and there is no reason to suspect that
> a component holds any resources after the close/end method is
> called.
> [Since I'm really only considering the end of the
> transaction only the close/end method is needed]. An example of
> this would be a SAX Transformer with its startDocument/endDocument
> methods, or a non-component example might be a java.io.InputStream
> with its close method.
>
> 3. Finally there are interfaces which imply a transaction (i.e. that
> the implementation may need to hold resources), but do not have any
> methods delineating the transaction. The only example I can think
> of for this one is not a component but the java.util.Iterator, which
> has a next() method but no ending method.
>
> (end quote)
>
> ------------------
> TYPE 1:
>
> Components of type 1 are looked up directly:
>
> public class MyComposer implements Composable {
>
>   private DocumentStore store = null;
>
>   public void compose (ComponentManager manager) {
>     store = (DocumentStore) manager.lookup (DocumentStore.ROLE);
>   }
> }
>
> Components of type 1 are never released. A client keeps a reference
> to it for the duration of its lifetime.
>
> I believe we have consensus on this.
>
> ------------------
> TYPES 2 and 3:
>
> Components of type two and three are not looked up directly:
>
> public class MyComposer implements Composable {
>
>   private TransformerManager transformerManager = null;
>
>   public void compose (ComponentManager manager) {
>     transformerManager = (TransformerManager) manager.lookup
>       (TransformerManager.ROLE);
>   }
>
>   public void doStuff () {
>     Transformer transformer = transformerManager.getTransformer ();
>     try {
>     } finally {
>       transformerManager.release (transformer);
>       // OR
>       transformer.release();
>       // OR
>       transformer.endDocument();
>     }
>   }
> }
>
> As seen above, for components whose interface makes them thread-unsafe,
> there *must* be a method, either in the associated manager, or in the
> component itself, that, when called by the client, indicates that
> the client is finished using the component.
>
> I believe we have consensus on this.
> (end summary)
>
> --------------------------------------------
>
> Implications of Type 1 (my own thoughts, no consensus on this):
>
> + As there is no release() or equivalent, the implementation is
> restricted to only holding resources during method invocations.
>
> + All implementations must be thread safe.

as I said above I don't see why they must be thread safe.

> Implications of types 2 and 3 (my own thoughts, no consensus on this):
>
> + As there is a release() or equivalent, the implementation may
> hold resources during the entire transaction.
>
> + Implementations may be thread safe, but need not be.
>
> + For components of this type selected with a hint, we still
> get the ugly two-step lookup we have with ComponentSelectors.
>
> + A XXXXManager interface and a corresponding implementation is
> needed for each type 2 and 3 component == more code to write.

I disagree in the case of type 2 components... As long as the component
doesn't hold any resources outside the transaction (e.g. before the
close/end method on the interface returns, the component releases all
filehandles and database connections), then there should be no problem
with the ComponentManager returning the component directly from the
lookup() method and the container/CM/pool waiting for the VM's GC to
inform them that the client has lost all references to the component.

Robert.

--
To unsubscribe, e-mail: <mailto:avalon-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:avalon-dev-help@jakarta.apache.org>
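For what it's worth, the GC-based reclamation scheme described above is easy to prototype. The sketch below is illustrative Python rather than Avalon code (all the names are made up): the pool hands the client a thin proxy and uses a finalizer to return the underlying component to the idle list once the client loses all references to the proxy.

```python
import gc
import weakref

class Component:
    """Stands in for a pooled component wrapping a scarce resource."""

class ComponentProxy:
    """Thin wrapper handed to clients; the pool never gives out the real object."""
    def __init__(self, target):
        self._target = target
    # ... forward the component's interface methods to self._target ...

class Pool:
    """Recycles a component once the client loses all references to its proxy."""
    def __init__(self):
        self.idle = []

    def lookup(self):
        comp = self.idle.pop() if self.idle else Component()
        proxy = ComponentProxy(comp)
        # The finalizer holds a strong reference to comp but only a weak
        # one to proxy, so dropping the proxy triggers the recycling.
        weakref.finalize(proxy, self.idle.append, comp)
        return proxy

pool = Pool()
p = pool.lookup()
real = p._target
del p
gc.collect()                           # immediate via refcounting in CPython
assert pool.idle and pool.idle[0] is real
assert pool.lookup()._target is real   # recycled rather than newly created
```

The same caveat from the email applies: GC timing is only a safe trigger if the component holds nothing scarce between transactions, since collection may be arbitrarily delayed on VMs without reference counting.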
Can't use GPIO 77 with MRAA - jamesod, Jul 28, 2015 9:27 AM

I'm having a bit of trouble understanding what is going on here. I am using many of the Edison pins as GPIO, and I am initializing them using the MRAA numbers given in this table: mraa: Intel Edison

I am using GPIO-42, 40, 81, 83, 80, 82, 43, 78, and 79 without problems, but when I try to use 77 in the same way it does not work. According to that table, GPIO-77 = MRAA 39. However when I do this:

mraa::Gpio* ADE_Reset = new mraa::Gpio(39);

I get this error on the console:

terminate called after throwing an instance of 'std::invalid_argument'
  what():  Invalid GPIO pin specified
Aborted

What am I missing here?

1. Re: Can't use GPIO 77 with MRAA - Intel_Peter, Jul 28, 2015 11:40 AM (in response to jamesod)

Hello jamesod, first I would like to ask which board you are using? If my assumption is not mistaken you are using a Mini Breakout Board, right? Anyway, I just tried it on both boards (Mini Breakout Board & Arduino Expansion Board), and I used MRAA pin 39 (GPIO77) on a simple script similar to a blink. It compiled without any issues on both boards, so I'm thinking this might be related to the MRAA version. I tested it with version 0.7.2. Which one are you using? Peter.

2. Re: Can't use GPIO 77 with MRAA - jamesod, Jul 28, 2015 12:28 PM (in response to Intel_Peter)

Peter, I have tried on both the Mini Breakout Board, the Sparkfun Console board, and my own board. I am using MRAA version 0.7.3. When I flashed the latest Edison image, version 0.7.2 was included, but when I tried to run my program from Eclipse, it said that the version was outdated or something and asked if I wanted to sync versions. I did. Is this a problem?

3. Re: Can't use GPIO 77 with MRAA - Intel_Peter, Jul 28, 2015 2:35 PM (in response to jamesod)

I doubt it. I just updated my MRAA version just to be sure, and still I got no issues. Can I see your code? I'd like to compile it on my Edison to see if there's a difference or if I get the same result. Peter.

4.
Re: Can't use GPIO 77 with MRAA - jamesod, Jul 28, 2015 2:48 PM (in response to Intel_Peter)

I've tried just using the blink example that is provided with the Eclipse IoT DevKit:

#include "mraa.hpp"
#include <iostream>
#include <unistd.h>

int main() {
    // select onboard LED pin based on the platform type
    // create a GPIO object from MRAA using it
    mraa_platform_t platform = mraa_get_platform_type();
    mraa::Gpio* d_pin = NULL;
    switch (platform) {
        case MRAA_INTEL_GALILEO_GEN1:
            d_pin = new mraa::Gpio(3, true, true);
            break;
        case MRAA_INTEL_GALILEO_GEN2:
            d_pin = new mraa::Gpio(13, true, false);
            break;
        case MRAA_INTEL_EDISON_FAB_C:
            d_pin = new mraa::Gpio(39, true, false);
            break;
        default:
            std::cerr << "Unsupported platform, exiting" << std::endl;
            return MRAA_ERROR_INVALID;
    }

Same result

5. Re: Can't use GPIO 77 with MRAA - jamesod, Jul 28, 2015 2:51 PM (in response to Intel_Peter)

Could this be a hardware issue? If I try this from the console:

root@Edison4:~# echo 77 > /sys/class/gpio/export

I get:

-sh: echo: write error: Device or resource busy

6. Re: Can't use GPIO 77 with MRAA - Intel_Peter, Jul 29, 2015 1:48 PM (in response to jamesod)

I just tried compiling that code with the command "g++ test.cpp -lmraa" and the compiler threw no error. Why don't you try it this way? If you get the same result as me you can create an executable file with the command "g++ test.cpp -o test -lmraa". And that file can be executed with "./test". Peter.

7. Re: Can't use GPIO 77 with MRAA - jamesod, Jul 29, 2015 4:51 PM (in response to Intel_Peter)

Just retried with a different Edison to check hardware. Same problem. I will try compiling from the command line like you suggest when I get back to the office tomorrow.

8. Re: Can't use GPIO 77 with MRAA - jamesod, Jul 30, 2015 7:57 AM (in response to Intel_Peter)

Tried compiling on command line.
It compiles without a problem, but when I run it I get the same Invalid GPIO pin specified error:

root@edison:~/test# g++ MraaTest.cpp -o test -lmraa
root@edison:~/test# ./test
terminate called after throwing an instance of 'std::invalid_argument'
  what():  Invalid GPIO pin specified
Aborted

I'm out of ideas. This has happened on two different Edisons, both with the latest image. Unfortunately I have some boards on the way that will require this pin to be used, so it is too late now to change I/O. I need to get this to work.

9. Re: Can't use GPIO 77 with MRAA - Intel_Peter, Jul 30, 2015 10:07 AM (in response to jamesod)

10. Re: Can't use GPIO 77 with MRAA - arfoll, Jul 31, 2015 2:36 AM (in response to jamesod)

It's probably not clear on the table but GPIO-77 is reserved for the SD card driver so is held by the kernel (see the table, it says 'SD'). That means you have to unload the SDIO kernel module before you can use it. Unfortunately the Edison kernel is not compiled that way by default so you have to recompile the kernel and not compile in the sd modules. You can see in /sys/kernel/debug/gpio that gpio-77 is held by sd-cd in-hi.

11. Re: Can't use GPIO 77 with MRAA - jamesod, Jul 31, 2015 8:56 AM (in response to arfoll)

Thanks for the explanation arfoll, but I'm still a bit confused here. Of the pins listed in /sys/kernel/debug/gpio that are also in the table here: mraa/edison.md at master · intel-iot-devkit/mraa · GitHub, there are only four that don't work. The pins that are commented out below do not work.
//GP40 = new mraa::Gpio(82, true, false);
GP42 = new mraa::Gpio(50, true, false);
GP43 = new mraa::Gpio(38, true, false);
//GP77 = new mraa::Gpio(39, true, false);
GP78 = new mraa::Gpio(52, true, false);
GP79 = new mraa::Gpio(53, true, false);
GP80 = new mraa::Gpio(54, true, false);
GP81 = new mraa::Gpio(55, true, false);
GP82 = new mraa::Gpio(40, true, false);
GP83 = new mraa::Gpio(41, true, false);
//GP111 = new mraa::Gpio(9, true, false);
GP128 = new mraa::Gpio(13, true, false);
GP129 = new mraa::Gpio(25, true, false);
GP130 = new mraa::Gpio(26, true, false);
GP131 = new mraa::Gpio(35, true, false);
//GP134 = new mraa::Gpio(44, true, false);

Is it documented anywhere that these pins don't work with the Edison kernel? If so I couldn't find it. I can't figure out why some of these work and some don't. Nothing is mentioned in the Edison Compute Module Hardware Guide, nothing indicates here: mraa/edison.md at master · intel-iot-devkit/mraa · GitHub that the pins won't work, and some of the reserved pins in /sys/kernel/debug/gpio are able to be used with MRAA. So how is someone to know unless they try them all? I used half a dozen pins from the mraa table and just assumed they were all ok. Now I'm stuck with a couple of bad options to fix this and still hit my deadline. I was planning on recompiling the kernel eventually, but that was something I didn't want to get into until I got my proof of concept prototype finished. Now I have to do it immediately, or hack up traces on my board.... Thanks for your help. I'm just growing frustrated with the Edison.

12. Re: Can't use GPIO 77 with MRAA - k4mcv, Sep 15, 2015 4:53 PM (in response to arfoll)

Thanks arfoll for the answer, I would never have guessed that this was the issue. Like jamesod, I already built my PCB assuming the sd pins could be used for gpio, and so I need to get this working.
While trying to recompile the kernel, in the menuconfig window (as described on the last page of the BSP guide), I selected "exclude" on the "MMC/SD/SDIO card support" line in the Device Drivers section. However, when I try to run bitbake, I get the following errors:

arch/x86/built-in.o: In function `wifi_platform_data_fastirq':
/media/kushal/scratch/edison-src/out/linux64/build/tmp/work/edison-poky-linux/linux-yocto/3.10.17-r0/linux/arch/x86/platform/intel-mid/device_libs/platform_wifi.c:76: undefined reference to `sdhci_pdata_set_quirks'
drivers/built-in.o:(.data+0xbddc): undefined reference to `mmc_emergency_init'
drivers/built-in.o:(.data+0xbde0): undefined reference to `mmc_emergency_write'

I guess it makes sense that the SD and MMC functions are undefined, because I removed the MMC/SD/SDIO support module. But then how am I supposed to recompile the kernel without the SD modules, like you suggested?

13. Re: Can't use GPIO 77 with MRAA
arfoll Sep 16, 2015 11:31 AM (in response to k4mcv)

Sadly, it looks like you're going to have to patch the kernel to get this working. Sorry, I don't really know enough to understand why the requirement is there.
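For anyone landing on this thread later: you can find out ahead of time which GPIOs the kernel has already claimed, and which pins mraa will actually open, before committing a board layout. The sketch below is mine, not from the thread — it uses Python (mraa also ships Python bindings), the debugfs line format varies somewhat between kernel versions, and the injectable `gpio_factory` exists only so the logic can be demonstrated off-device.

```python
import re

def claimed_gpios(debug_text):
    """Parse /sys/kernel/debug/gpio-style output into {gpio_number: label}.

    Lines typically look like ' gpio-77  (sd-cd    ) in  hi'; a pin with an
    empty label is unclaimed. Best-effort: the exact layout varies by kernel.
    """
    held = {}
    for line in debug_text.splitlines():
        m = re.search(r"gpio-(\d+)\s+\(([^)]*)\)", line)
        if m and m.group(2).strip():
            held[int(m.group(1))] = m.group(2).strip()
    return held

def probe_pins(pins, gpio_factory):
    """Try to open each pin; return (usable_pins, {pin: error_message}).

    On a real Edison you would pass mraa.Gpio (from `import mraa`) as the
    factory; it raises on reserved or invalid pins, as seen in the posts above.
    """
    usable, failed = set(), {}
    for pin in pins:
        try:
            gpio_factory(pin)
            usable.add(pin)
        except Exception as err:
            failed[pin] = str(err)
    return usable, failed

# Off-device demonstration with stand-ins for the real file and mraa.Gpio:
sample = " gpio-77  (sd-cd               ) in  hi\n gpio-78  (                    ) in  lo\n"
reserved = claimed_gpios(sample)  # {77: 'sd-cd'}

def fake_gpio(pin):
    if pin in reserved:
        raise ValueError("Invalid GPIO pin specified")

usable, failed = probe_pins([77, 78], fake_gpio)
print(sorted(usable), sorted(failed))  # [78] [77]
```

On the board itself you would run, as root, `claimed_gpios(open("/sys/kernel/debug/gpio").read())` and `probe_pins(candidate_pins, mraa.Gpio)`; note that probing briefly claims each pin, so do it before your application initializes anything.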
https://communities.intel.com/thread/77677
I was helping a co-worker who needed to check if a field exists in their arcpy script. Since we were at their computer, I thought I would just do a quick Google search and pull the code off this blog. That seemed logical, since the original purpose of this blog was exactly that: to serve as a handy, public place to store code snippets that I use and that others might find handy. Anyhow, my Google search on "Node Dangles field exists" came up with a 9.3 script to check if a field index exists (I also have a 10.0 version), but it did not come up with the field-exists snippet. So here it is:

def fieldExists(inFeatureClass, inFieldName):
    fieldList = arcpy.ListFields(inFeatureClass)
    for iField in fieldList:
        if iField.name.lower() == inFieldName.lower():
            return True
    return False
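The loop above can also be collapsed with `any()`. In this sketch the `list_fields` hook is my own addition — it defaults to `arcpy.ListFields`, but can be overridden so the helper is testable on a machine without ArcGIS installed:

```python
def field_exists(dataset, field_name, list_fields=None):
    """Case-insensitive field check, mirroring fieldExists above.

    list_fields defaults to arcpy.ListFields; it is injectable purely so the
    helper can be exercised without an ArcGIS install.
    """
    if list_fields is None:
        import arcpy  # deferred: only needed when running against real data
        list_fields = arcpy.ListFields
    return any(f.name.lower() == field_name.lower() for f in list_fields(dataset))

# Quick self-check with stand-in field objects:
class _Field:
    def __init__(self, name):
        self.name = name

fields = [_Field("OBJECTID"), _Field("Shape"), _Field("ROAD_NAME")]
print(field_exists("roads", "road_name", lambda ds: fields))  # True
print(field_exists("roads", "width", lambda ds: fields))      # False
```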
http://milesgis.com/2013/09/23/arcpy-check-if-a-field-exists/
A small, simple & self-contained implementation of the Glicko-2 rating algorithm in Scala that also helps the user with maintaining a leaderboard and allows for custom scoring rules.

Setup

Version 1.5 is currently available for Scala 2.11 and 2.12. The last version to support Scala 2.10 was 1.3. To use this library in your SBT project, add the following to your build definition:

resolvers += "jcenter" at ""
libraryDependencies += "sglicko2" %% "sglicko2" % "1.5"

Usage

Here's a simple, runnable example of how the library can be used:

import sglicko2._, EitherOnePlayerWinsOrItsADraw._

object Example extends App {
  val glicko2 = new Glicko2[Symbol, EitherOnePlayerWinsOrItsADraw]

  val ratingPeriod = glicko2.newRatingPeriod.withGames(
    ('Abby, 'Becky, Player1Wins),
    ('Abby, 'Chas, Player1Wins),
    ('Abby, 'Dave, Player1Wins),
    ('Becky, 'Chas, Player2Wins),
    ('Becky, 'Dave, Draw),
    ('Chas, 'Dave, Player2Wins))

  val leaderboard = glicko2.updatedLeaderboard(glicko2.newLeaderboard, ratingPeriod)

  leaderboard.rankedPlayers foreach println
}

You can find more example code in the test sources. The main sources should be very easy to understand, too, so don't hesitate to look at those if you have questions. Also, if you use this library, I'd love to hear from you. Thanks <3
https://index.scala-lang.org/asflierl/sglicko2/sglicko2/1.4?target=_2.11
I bought a book called C++ Without Fear, and it came with a C++ compiler and the RHIDE environment. I installed the compiler and RHIDE from a CD in the book, and it looked like it installed fine. I ran the first exercise in the book and compiled it with no errors. But when I run the compiled program, it shows the source code in Notepad instead of running the program. What could I have done wrong? I am running this compiler and RHIDE on a Dell 8600 laptop. Please give me your opinion.
Thank you, Paul

Well, you probably accidentally changed the extension association for .cpp to Notepad somehow. Try right-clicking a .cpp file, choosing Open With, selecting your program from the list, and choosing to remember it.
ahoodin
To keep the plot moving, that's why.

ahoodin, you were right on with your explanation. This may be a very stupid question, but what program do I associate with this .cpp file? I want the file to actually run, with either an on-screen message asking me to input something or printed screen output, not the complete source output on my Notepad screen, like:

#include <iostream>
using namespace std;

int main()
{
    double ctemp, ftemp;
    cout << "Input a celsius temp and press ENTER: ";
    cin >> ctemp;
    ftemp = (ctemp * 1.8) + 32;
    cout << "Fahrenheit tmp is: " << Ftemp;
    return 0;
}

Please help a real wondering person.
Thank you, Paul

Well, the code will compile; however, 'Ftemp' should be 'ftemp'. I have been at a seminar and my laptop broke, so sorry about the wait. Here is an excerpt from the Microsoft Knowledge Base:

To change which program starts when you double-click a file

Notes:
* You cannot use this method for a file that does not have a file name extension, or for a file that has an .exe, .com, or .bat extension.
* If you change the program that Windows uses to open a certain kind of file, and that program was not designed for the type of data in that file, the files may not appear correctly in the program. To be safe, note the name of the program that Windows previously used to open the file type, so that you can reverse your settings if necessary.

To change which program starts when you double-click a file, follow these steps:
1. Open Windows Explorer by right-clicking the Start button, and then click Explore.
2. Click a folder that contains a file of the type that you want Windows to open in a program that you select.
3. Right-click the file and, depending on the programs installed on your computer, complete one of the following steps:
   * Click Open With to choose the program that you want.
   * Point to Open With, and then click Choose Program to choose the program that you want.
5. Click to select the Always use the selected program to open this kind of file check box if it is not selected.
6. Click OK.

Here is the article that covers the topic of extensions:
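A quicker way to inspect the association on Windows is the `assoc` builtin in cmd.exe (`assoc .cpp` prints something like `.cpp=cppfile`, or an error message if there is no association). A small sketch of parsing that output — the function name and the sample strings are illustrative, not from the thread:

```python
def parse_assoc(output):
    """Parse 'assoc .ext' style output such as '.cpp=cppfile' into a
    (extension, file_type) pair, or return None when cmd.exe reports
    that no association exists (its error text contains no '=')."""
    output = output.strip()
    if "=" not in output:
        return None
    key, _, value = output.partition("=")
    return key, value

print(parse_assoc(".cpp=cppfile"))  # ('.cpp', 'cppfile')
print(parse_assoc("File association not found for extension .cpp"))  # None
```

On an actual Windows machine you would feed it the real output, e.g. `subprocess.run(["cmd", "/c", "assoc", ".cpp"], capture_output=True, text=True).stdout`; running `assoc .cpp=cppfile` from an elevated prompt resets the association.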
http://forums.codeguru.com/showthread.php?510486-New-challenges-from-DevOps-development-cycle-for-your-infrastructure&goto=nextnewest
Subject: Re: [boost] [config] Macro for null pointer From: Marshall Clow (mclow.lists_at_[hidden]) Date: 2012-12-07 16:54:28 On Dec 7, 2012, at 12:14 PM, Daniel Russel <drussel_at_[hidden]> wrote: > >> Well, I think some individuals here (I'd put myself in that camp) are >> opposed to injecting anything into the global namespace if it can be >> helped, and here it certainly can be helped. So if we add a nullptr >> emulation in Boost, it's going to have to be paired with a using macro, a >> different name (i.e., boost::nullptr_), or both. If no one will use this >> utility given the latter imperfections, I'm fine with just dropping the >> proposal altogether and everyone can go back to using NULL or C++11. > Just to put in my vote for not putting anything in the global namespace but providing a "BOOST_USING_NULLPTR" macro. We keep our nullptr emulation in our library namespace too and would gladly switch to a boost one. We've already got: BOOST_NO_CXX11_NULLPTR --
https://lists.boost.org/Archives/boost/2012/12/198992.php
February 7, 2019 Your Favorite Five Towns Family Newspaper Distributed weekly in the Five Towns, Long Island, Queens & Brooklyn STATE OF THE UNION See page 7 Around the Community Rav Schustal Inspires at Motzei Shabbos Learning Program 40 President Trump: “We Must Choose Between Greatness or Gridlock” pg 94 Do You Really Think Tom Brady is Happy? 44 Dedication of Ari Block, z”l, Basketball Court pg Special Appreciation for Daveners of the Month 40 Parshas Terumah 3 Adar I 5778 Candle Lighting Time 5:03 pm Sponsored by See page 30 98 An O-Fish-al Chessed PAGE 9 pg 49 Passover Vacation Section Starts on page 107 2 FEBRUARY 7, 2019 | The Jewish Home Y YOU! A DETING N UTO GRE S S WARD I OR TH F OK ns. LO Still E W ng i ept c c a s ad d an res o ati v r e SPECIAL GUEST APPEARANCE ARI GOLDWAG YOFR ALUMNUS Celebrating a YOVEL FIFTY YEARS OF HARBOTZAS TORAH אות הכרת הטוב HAGAON HARAV YECHIEL YITZCHOK PERR SHLIT”A ROSH H AY E S H I VA GIVING THE ROSH YESHIVA THE GIFT OF A MORTGAGE-FREE CAMPUS שע"ט BENJAMIN BRAFMAN HO N OR A RY DINNER CHAIRMAN ת-תשכ"ט YESHIVA of FAR ROCKAWAY ⋅ איתן דרך ישיבה JONAH LOBELL DINNER CHAIRMAN ⋅ MOLDING TALMIDIM BUILDING GENERATIONS SUNDAY, FEBRUARY 10, 2019 THE SANDS ATLANTIC BEACH For reservations or journal ad placement: P. 718.327.7600 E. dinner@yofr.org W. KABBOLAS PONIM FOR THE ROSH HAYESHIVA 5:00 | COCKTAILS 5:30 | FORMAL DINNER 6:15 LIVE KUMZITZ & DANCING FEATURING ARI GOLDWAG AND AVIDON MOSCOVITZ The Jewish Home | FEBRUARY 7, 2019 3 FEBRUARY 7, 2019 | The Jewish Home בס”ד Far rockaway Lawrence A Continuing Torah Partnership Thi s Shabbos of Chizuk ! s o b b a h S February 15-16, 2019 שבת קודש פרשת תצוה • With the Roshei haYeshiva of Beth MedRash Govoha HaGaon HaRav HaGaon HaRav YeRucHim olsHin שליט”אYisRoel neuman שליט”א אכסניא וסעודת ליל שבת Mr. & Mrs. asher schonkopF מנחה וקבלת שבת bais Medrash heichel dovid harav Mordechai stern שליט”א 215 Central Avenue • 5:20 pm עונג שבת Co-hosted by Mr. & Mrs. asher schonkopF and Mrs. and Mrs. 
eli tendler 10 Wedgewood Lane • 8:45 pm עם מזמרים שחרית bais Medrash oF lawrence harav dovid FordshaM שליט”א 48 Lawrence Ave. • 8:40 am מוסף agudas achiM harav elisha horowitz שליט”א 200 Broadway סעודת שבת Mr. & Mrs. nachuM Futersak שיעור congregation kenesses yisroel harav eytan Feiner שליט”א 728 Empire Avenue • 3:55 pm מנחה וסעודה שלישית agudath israel oF long island harav Meir braunstein שליט”א 1121 Sage Street 5:00 pm אכסניא וסעודת ליל שבת Mr. & Mrs. yaakov herzka מנחה וקבלת שבת kehilas bais yisroel harav elisha sandler שליט”א 1215-1225 Caffrey Ave. • 5:15 pm עונג שבת Mr. & Mrs. yaakov herzka 1029 New McNeil Avenue • 8:45 pm עם מזמרים שחרית kollel avreichiM harav leibel rand שליט”א 12-04 Beach 12th St. • 8:00 am סעודת שבת Mr. & Mrs. yitzchok hoschander שיעור khal chesed v’eMes harav shMaryahu weinberg שליט”א 1037 Bay 24th Street • 4:00 pm מנחה וסעודה שלישית agudas yisroel oF bayswater harav MenacheM FeiFer שליט”א 2422 Bayswater Avenue 4:50 pm Joins u! For further information please call Rabbi Mordechai Herskowitz 732-367-1060 x4252 or Rabbi Eliyahu Shumulinskiy 732-267-1643 Dynagrafik 845-352-1266 4 The Jewish Home | FEBRUARY 7, 2019 5 6 FEBRUARY 7, 2019 | The Jewish Home Dear Readers, R Union you see that when the president’s opposition needs to get their point across and they’re not able to speak, they do it with their eyes, their mouths, their arms, their papers. They’ll smirk, they’ll roll their eyes, they’ll shake their heads, they’ll cross their arms, they’ll shuffle meaningless pieces of paper – just to show their displeasure. Gestures matter. Words matter, too. You see, when you are trying to get your point across, it helps to engender a feeling of appreciation from the other side. When President Trump broached the horrific topic of infanticide, he attempted to create an emotional sentiment from his detractors when he spoke of the beautiful image of a mother holding her infant after taking off from work after birth. 
As Democrats cooed at the stirring vision, he then pointed out that by killing a newborn, they are effectively murdering a young child. Lesson number 3: speaking to a person’s emotions may just help to sway their views (although in this case, sadly, I doubt it’s going to work). Last but not least, it was funny to watch members of Congress standing – or at least considering to stand during the speech. When President Trump spoke about women in the workforce, women in the audience should have applauded and lauded the president for the strides he’s been making for their gender. But the sea of Democratic women dressed in white was tepid at first. Should they stand or should they continue sitting? You could see the indecision on their faces and bodies as they hovered over their seats. But then, once one woman stood, the others followed because, you know, if she’s doing it, I should be doing it too. So lesson number 4: peer pressure is alive and well, even when you’re in Congress. Wishing you a wonderful week, Shoshana egardless of which side of the aisle one stands, the State of the Union address is one of the more pomp-and-circumstance events that we have in the United States. Absent a royal family, we aren’t privy to a king or queen (thank G-d) waving his or her hand from a balcony as the royal guard marches below. Instead, we’re treated to a president who is greeted by the people vying to shake his hand as he slow-walks down the carpet to the podium. The vice president sits behind him on his right; the speaker of the House sits on his left. The Supreme Court justices are there. So are the army’s top brass, members of the president’s Cabinet, and members of Congress. Special guests are invited and sit near the First Lady and the president’s family. And the whole country watches as the president addresses the nation. I learned a few things from watching Tuesday’s night speech – and that’s aside from what the president said. 
I asked a friend of mine on Wednesday morning if she watched the speech. Replying in the affirmative, she told me, “I loved it. But we all knew that he did all those things. Why do they always need to say it again?” In a way, I heard what she was saying. She follows the news and so she knows that the economy is doing well, the U.S. is asserting itself when it comes to China and Russia, and that Trump is gung-ho about building a wall. But not everyone follows the news. And even if you do, sometimes it’s good to toot your horn here and there. So here’s my lesson number 1: it’s OK to talk about your accomplishments. Both your detractors and supporters will benefit from a little reminder of what you’ve done. Lesson number 2: gestures matter. Yes, we all are well-versed in hilchos lashon hara and know that facial expressions matter just as much as what comes out of our mouths. Watching the State of the 9 AM Showers Sunny / Wind / Wind 53° 24° 35° 22° 10 11 12 Partly Cloudy AM Snow Showers Snow 38° 30° 39° 27° 37° 35° Berish Edelman Adina Goodman Mati Jacobovits Design & Production Gabe Solomon Distribution & Logistics P.O. BOX 266 Lawrence, NY 11559 Phone | 516-734-0858 Fax | 516-734-0857 Classified: Deadline Monday 5PM classifieds@fivetownsjewishhome.com text 443-929-4003 The Jewish Home is an independent weekly magazine. Opinions expressed by writers are not neces sarily | February 8 – February 14 8 Yitzy Halpern 13 Rain / Snow Showers 41° 27° 14 Mostly Cloudy 38° 31° Friday, February 8 Parshas Terumah Candle Lighting: 5:03 pm Shabbos Ends: 6:05 pm Rabbeinu Tam: 6:35 pm The Jewish Home | FEBRUARY 7, 2019 VACATION IS OVER Time to restock Time to restock your fridge and pantry with your family’s favorite foods! With the huge and affordably priced selection in our grocery, meat, produce, fish, take-out, bakery, appetizing, and sushi departments, you’ll find everything you need for a deliciously satisfying transition to your everyday routine. 
7 8 FEBRUARY 7, 2019 | The Jewish Home Contents LETTERS TO THE EDITOR 8 COMMUNITY 8 Readers’ Poll Community Happenings 40 NEWS 84 Global 12 National 26 Odd-but-True Stories 36 State of the Union 94 ISRAEL Israel News 22 My Israel Home 93 PEOPLE 90 The Wandering Jew The Stormy History of Piracy on the Seas by Avi Heiligman 124 PARSHA Rabbi Wein 82 Making it a Habit of Hischadshus by Rav Moshe Weinberger 84 Parsha in Four by Eytan Kobre 86 HEALTH & FITNESS Be a Man by Dr. Deb Hirschhorn 103 Your Shabbos Guide to Healthy Eating: Side Dishes by Cindy Weinberger, MS RD CDN 104 Diet Baggage by Alice Harrosh 106 Friendships and the Early Years of Life by Hylton I Lightman, MD 108 FOOD & LEISURE The Aussie Gourmet: Beer Glazed Wings 112 LIFESTYLES You Think Tom Brady is Happy? 98 Dear Editor, I noticed a parallel in both Rav Moshe Weinberger’s article in this week’s issue and Rabbi Mordechai Yaffe’s article in this same issue. Rav Weinberger speaks about how G-d is in the details and how we need to remember that every piece of halacha is important. Rabbi Yaffe spoke about the importance of being a role model for our children and for being able to apply what we’ve learned to “real life.” Both of these articles talk about the importance of the small things in life. So many times, we may think that no one sees if we come a bit late to davening or raise our voices a bit louder to our family members or cut someone off when driving. But it’s all about the little things in life. Hashem is always watching, and if we keep that in mind, we’ll be mindful of the minutiae in our lives and act accordingly. L’havdil, our children are always watching too. We need to model good behavior and show them how they should be acting at every hour, every day, with every person, in business and at home. A growing person is one who is mindful of the little things in life. All the best, Adam H. 
Dear Editor, As I was reading your article last week and your letters to the editor, I felt like you were trying to paint a good picture of Robert Kraft and how they love Israel. “Robert Kraft and Sheldon Adelson are the biggest influencers in the kiruv community,” one reader wrote. If Robert Kraft is so Jewish and great, wear a yarmulke, wear tzitzis, keep Shabbos, eat kosher. This article made me think lowly of Robert Kraft. And I am sure a lot of people agree with me. Moshe Gladstone Far Rockaway, NY Dear Editor, Democrats: The Party of Infanticide In years long past, they were the party of slavery, Jim Crow, the Black Codes, the KKK and segregation; this was the sordid and disreputable history of the Democrat Party. In an apparent effort to eclipse their own abysmal history, they’ve now fervently embraced one of the most hideous practices ever in civilized history: infanticide. Democrats are now perfectly complacent with the murder of babies merely seconds prior to birth. If that doesn’t sufficiently perturb you, and you’re still obstinate in your euphemistically-named position “prochoice,” you’re an evil and execrable person who deserves no respect and are a pathetic excuse for a so-called “human” being. Anyone who’s still affiliated with the Democrat Party at the present juncture should be utterly ashamed of themselves. Rafi Metz Dear Editor, Two days ago from when I am write this letter, the Senate overContinued on page 10 Dating Dialogue, Moderated by Jennifer Mann, LCSW 100 108 Tribe Tech Review 126 Your Money 130 Confusing Messages by Rivki D. Rosenwald Esq., CLC, SDS 134 HUMOR Centerfold Charmingly Chopped 80 128 POLITICAL CROSSFIRE Notable Quotes 114 Democrats Strive to Forget Fragile Peace by David Ignatius 120 Schultz is Calling Democrats Out by Marc A. Thiessen 122 CLASSIFIEDS 131 Would you rather go into the past and meet your ancestors or go into the future and meet your great-great-grandchildren? 
68 32 % Past % Future The Jewish Home | FEBRUARY 7, 2019 ing in all of ic r p y a d y r e Best ev g Island! n o L & s n e e Brooklyn, Qu AY! ( Free Parking VERY D E S T C U D O R NEW P TM much Over 150 Spaces! More for Less Prices Good Sunday, February 10th through Friday, February 15th, 2019 Dai Day Duck Sauce Malt-O-Meal Cereals $ 49 21 oz 3 $ 69 40 oz $ 49 11 oz/15 oz Taanug Stix Kedem Grape Juice Sofia Pearl Couscous Plain or Tri-Color Except Dots 99 ¢ 11 oz Olvita Extra Virgin Olive Oil 7 $ 99 1 Liter Walla Tuna or Egg Salad 1 $ 99 7 oz Green Giant Corn on the Cob 3 $ 99 12 Pieces Whole or Cut-Up Chicken 2 $ 19 LB Sweet Cantaloupe 3 2/$ Viennese Crunch 7 $ 49 LB 2 Regular or Light 3 $ 99 64 oz Best Bev Insulated 12 oz/16 oz Hot Cups with Lids 6 $ 99 50 Pack Gevina Greek Yogurts Assorted 79 ¢ 5 oz Gardein Meat-Free Assorted 3 $ 99 9 oz - 12 oz All Varieties 1 Gefen 13-17 or 18-25 Mini Cucumbers in Brine 2 $ 49 19 oz Plastico Mega Pack Bags 150 Count Tall Kitchen 90 Count Trash $ Avenue Flour TreeARipe All Purpose or Unbleached Grove Select $ Orange 99 5 LB Juice 1$ 3 Navel Pastrami 7 $ 49 LB 5 Green Squash 9 String Beans 1 $ 49 LB LB $ 99 LB 52 oz $ 99 14 oz/16 oz $ 39 LB Tuna Steaks 199 Dagim Zucchini Fritters, Mushroom Bites or Eggplant Cutlets White Turkey Roast 59¢ 1499 Sweet Chili Chicken Cutlets $ 1099 LB Keilim Mikveh on Premises | Pre-Shabbos Buffet-till 2 hours before Shabbos We reserve the right to limit quantities. No rain checks. Not responsible for typographical errors. 9 10 FEBRUARY 7, 2019 | The Jewish Home why did you ask Senator Al Franklin to step down without due process (not a fan of him, but still)? All you are is full of hate and selfishness. Even the Democratic leader in the senate, Senator Schumer, whom I don’t like, voted against BDS. Madam Senator, it’s time for you to start thinking for “we the people.” Stop advancing your own personal agenda, but instead start advancing your constituents’ agenda who elected you. 
Sincerely, Donny Simcha Guttman Dear Editor, Wow! Rabbi Yaffe’s article this week, “Applied Integrity,” should be required reading for anyone who has children. There is so much that our children learn in school but what they see at home and from their parents is so much of a greater influence. If only we remembered that little eyes are watching every word, action, tefillah, etc. we would be so much more careful. After all, don’t we want our children to grow up to be menschlech individuals we can be proud of? Sincerely, Adina Gerber Continued from page 8 whelmingly voted against the disgusting, hateful BDS movement. But 17 Democratic senators voted in favor of BDS. One of these shameful senators was our own senator, Senator Gillibrand. Whether you disagree politically with her or not, once upon a time she was a pro-Israel Democrat. She has gone so far left now that she voted against her own constituents, the Jewish people, which comprise around 1 million New Yorkers. Senator Gillibrand is running for president next year and just remember that she has voted in favor of the Iran Deal and BDS. This is outrageous and Jews that vote for her are shameful. We can’t support these crazy people who vote against their own constituents. If Senator Gillibrand reads this, this is my message for you: “Shame on you. What you continue to do is despicable. Starting with you lying to New York that you wouldn’t run for president to flip flopping on the issues.” We, as Jews, no matter Republican or Democrat, must support each other as it’s shameful to see people’s own agendas get in front of morality. We, as Jews, are Am Yisrael and it’s embarrassing to see a senator that represents the biggest Jewish population in America vote against us. Senator Gillibrand has said that even though she’s “against” the BDS movement, she needs to support it because we have to give them the right to speak. 
And if you are such a proponent of our systems all a sudden, then Dear Editor, My late relative, Rav Avraham Genechovsky zt”l, provided me with the following phenomenal drash relating to Purim. The Gemara (Taanis 29a) states, “Mishenichnas Adar marbim b’simcha, when Adar arrives, one should rejoice.” Rav Avraham related this to another Gemara (Beitzah 15b) that says, “One who plants a tree called ‘Adar’ is guaranteed that his property will endure. “Therefore, as Adar is symbolic of happiness, we may say that one who “plants” happiness into his heart will endure. I heard a topically related thought from Rabbi Kornfeld shlita, rosh kollel of Kollel Iyun Hadaf in Israel. The Gemara says, “A pumpkin is only shown in a dream to one who fears Heaven with all his might” (Berachos 56b). He explained the meaning of this passage in profound fashion. The characteristic of a pumpkin is that the more it grows, the deeper it sinks into the ground. So too, the true sign of one who fears G-d is that as he grows and becomes greater, he sinks lower into the ground, ensconced in humility. In summation, planting happiness within and falling in humility as you grow could be the right plan to prosper. Steven Genack The Jewish Home | FEBRUARY 7, 2019 11 12 FEBRUARY 7, 2019 | The Jewish Home The Week In News S E R V I C I N G T H E F I V E TOW N S the country’s president must be a Christian, the prime minister’s slot is reserved for a Sunni Muslim, and only Shi’ites can be appointed parliamentary speaker. A New Govt in Lebanon Australia Suffers from Extreme Heat Lebanon managed to establish a governing coalition late last week, breaking an 8-month deadlock that added to the country’s economic woes. Sa’ad Hariri will stay on as prime minister and will lead the 30-minister government. Following the first government meeting, Hariri said that he would focus on rehabilitating Lebanon’s dismal economic outlook by accessing billions in foreign aid to pay back the national debt. 
The current debt is twice as large as the economy, which has stagnated over recent years with only 2% average annual growth. “There are difficult decisions in all areas that we must take,” Hariri noted. “We are facing economic, financial, social and administrative challenges.” He added, “It has been a difficult political period, especially after the elections, and we must turn the page and start working.” Lebanon’s political system had been deadlocked ever since the most previous elections in May regarding the inclusion of Sunni politicians allied with the Hezbollah terror militia. Hariri, a fierce Hezbollah opponent, had initially refused to give Hezbollah-allied lawmakers a cabinet position, only to back down last week. Under the new agreement, MP Hasan Mrad will be appointed Minister of State in what analysts see as a clear win for the Iran-controlled proxy. Despite being designated by the United States and Israel as a terror group, Hezbollah now control’s Lebanon’s fourth-largest budget. Political gridlock has defined Lebanon for years stemming from its large Sunni, Shiite, Maronite Christian, and Druze minorities. By law, While many of us are shivering in our boots, folks Down Under are sweltering in temperatures that are soaring. Farmers have been forced to feed livestock by hand as crops haven’t been growing. They worry about planting for the next season after all the heat and winds have dried out their fields. Week after week, temperatures in AustraliaºC (116 F). On Friday, Australia’s Bureau of Meteorology announced it had been the country’s hottest January on record, describing the weather as “unprecedented.” In temperatures above 40ºC (104ºF) the human body begins to experience heat exhaustion. Once the temperature exceeds 41ºC (105ºF), the body starts to shut down. Health warnings have been issued throughout Australia advising people to stay indoors during the hottest part of the day, minimize physical activity and keep hydrated. 
A viral video circulating in the country in January showed two farmers in a river holding up two huge dead fish. Mounds of fish have been dying en masse due to the extreme temperatures and drought conditions, as fish suffocate for lack of oxygen. Fish are not the only animals affected. Dozens of wild horses were found dead surrounding a dried up AC K RE 2 Decades UC S B R O O K LY N C L A K E W O O D O R D of S A TR The Jewish Home | FEBRUARY 7, 2019 M O N S E Y Rebetzin Bulk a will s"xc be in Eretz Yisrael CES FEB 10 - 14 Please call 052.539.854 0o 7 18.769.816 r 0 to schedule an interview Explore the opportunities Discover your potential THE NEW SEMINARY. DO IT QUICK. DO IT RIGHT. U N D E R G R A D U AT E P R O G R A M S Inspiration & Growth SEMINARY PROGRAM BA BS BACHELOR OF ARTS: SOCIAL SCIENCES BACHELOR OF SCIENCE: BUSINESS, NATURAL SCIENCES NEW!MINOR IN BUSINESS GRADUATE BSN BACHELOR OF SCIENCE: NURSING PROGRAMS DC DESIGN / 732.901.4784 N U R S E BS/MS OT OCCUPATIONAL THERAPY LIU Newly Expanded Financial Aid Available!! MSW MASTERS OF SOCIALWORK LIU P R A C T I T I O N E R MSPMH-NP P R O G R A M S FNP GNP PSYCHIATRIC MENTAL HEALTH NURSE PRACTITIONER FAMILY NURSE PRACTITIONER GERIATRIC NURSE PRACTITIONER ADELPHI UNIVERSITY LIU SCHOOL OF NURSING ADELPHI UNIVERSITY AN EXCLUSIVE PROGRAM OF A P P L I C AT I O N P R O C E S S O P E N F O R SPRING & FALL 2019 FINANCIAL AID AND ACADEMIC SCHOLARSHIPS AVAILABLE N E W YO R K : 1492 EAST 12TH STREET, BROOKLYN, NY 11230 7 18.769.8160 f: 7 18.769.8640 THE New Seminary asjv rbhnx N E W J E R S E Y: 139 OCEAN AVENUE, LAKEWOOD, NJ 08701 732.366.3500 f: 732.367.8640 Rebbetzin Sora F. Bulka MENAHELES Rabbi Yeshaya Levy MENAHEL email: INFO@THENEWSEMINARY.ORG online: W W W.T H E N E WS E M I N A RY.O R G 13 14 FEBRUARY 7, 2019 | The Jewish Home watering hole last month. At least 2,000 flying foxes were found dead due to heat stress. 
Authorities and infrastructure have been struggling to keep up with the extreme weather’s disastrous side effects. Dozens of bushfires broke out across the southern state of Tasmania, destroying homes and wilderness as hundreds of firefighters sought to get the blazes under control. Facing pressure from Australians desperate to escape the heat, the country’s power grid even began to buckle. Hundreds of thousands of homes were sporadically left without power in Victoria and South Australia amid surging demand as residents turned up air conditioners and fans. Tasmanian Premier Will Hodgman on Wednesday warned that conditions would “worsen.” Syria Guilty in Death of War Correspondent A U.S. judge has found Syria responsible for the death of journalist Marie Colvin and has ordered the Assad regime to pay $302 million in damages to her family. Syria was ordered to pay $300 million in punitive damages while the other $2.5 million was for pain and suffering. Colvin, 56, was killed in 2012 during an artillery barrage in the Syrian city of Homs while on an assignment by England’s Sunday Times. In a 36-page ruling, D.C. District Court Judge Amy Berman Jackson said that the Assad regime had specifically targeted the building Colvin had taken shelter in for the purpose of silencing the press from reporting on the war-torn country. “She was specifically targeted because of her profession, for the purpose of silencing those reporting on the growing opposition movement in the country,” wrote Judge Jackson. 
The justice also noted testimony from a Syrian defector with knowledge of the artillery strike who alleged that “officials at the highest level of the Syrian government carefully planned and executed the artillery assault on the Baba Amr Media Center for the specific purpose of killing the journalists inside.” Jackson added that “the targeted murder of an American citizen, whose courageous work was not only important, but vital to our understanding of war zones and of wars generally, is outrageous.” The ruling was welcomed by Colvin’s family, which had first filed the wrongful death lawsuit in 2016. “It’s been almost seven years since my sister was killed by the Assad regime, and not a day goes by when I don’t think of her,” said her sister Cathleen. “It is my greatest hope that the court’s ruling will lead to other criminal prosecutions and serve as a deterrent against future attacks on the press and on civilians.” While Syria never responded to the lawsuit, President Basher Assad argued that his military played no role in Colvin’s death during a 2016 interview with NBC. “It’s a war, and she came illegally to Syria; she worked with the terrorists and because she came illegally, she’s responsible [for] everything that befell her,” Assad said. Colvin had been a celebrated journalist known for the eye patch she wore following the loss of her eye from shrapnel in 2001 while reporting from Sri Lanka. She worked for the Sunday Times from 1985 until her death and was known for venturing into high risk conflict areas for stories. Colvin first rose to fame in 1986 when she became the first person to interview Libyan strongman Muammar Gaddafi after the U.S. bombed his country in an assault known as Operation Dorado Canyon. A feature film chronicling her life was later released in 2016. Paris’ Deadliest Fire in a Decade Was Arson On Tuesday, Paris was struck with its deadliest fire in over a decade, in which ten people were killed. 
The Jewish Home | FEBRUARY 7, 2019

Residents clamoring to escape the flames fled to the roof and across balconies as the fire engulfed their nine-story apartment building. A 40-year-old female resident of the building, said to have a history of psychiatric problems, was arrested on the street in the hours after the 1 a.m. blaze, as French police opened a criminal investigation into voluntary arson resulting in death. Police say she was drunk. French President Emmanuel Macron said on Twitter: “France wakes up with emotion after the fire in rue Erlanger in Paris last night.” Multiple neighbors said they heard the suspect and her neighbor, an off-duty firefighter, arguing over the woman’s music before the fire broke out, and then heard the woman cry out: “So you’re a firefighter? Here’s a fire.” Police had responded to the dispute earlier in the evening. The firefighter and his friend told officers they were going to leave the building because she was dangerous.
Tuesday’s fire was the deadliest fire in Paris since the April 2005 hotel fire near the capital’s famed Opera that killed 24 people. Interior Minister Christophe Castaner spoke to reporters at the scene on Tuesday morning, as plumes of smoke speckled the sky. “I want to salute the huge mobilization of the Paris firefighters,” he said. “More than 250 people arrived immediately and, throughout the night, saved over 50 people in truly exceptional conditions.” Over 30 people were being treated for “relatively” light injuries, he said. Among the injured were at least eight firefighters. The building is on rue Erlanger in the 16th arrondissement, one of the calmest and priciest districts of Paris, close to the popular Bois de Boulogne park. The fire was extinguished by mid-morning.

Cryptocurrency Chaos

A major Canadian cryptocurrency exchange is in the spotlight following the sudden death of its founder, which has left customers unable to access $190 million in funds. Gerald Cotten, the 30-year-old founder of QuadrigaCX, died in India on December 9, 2018, due to complications from Crohn’s disease, according to a sworn affidavit by his wife, Jennifer Robertson. At the time of his death, Cotten was the only person with the password to access customer funds. Robertson says that she has received online threats as a result of the bizarre situation. A copy of Cotten’s death certificate was attached to Robertson’s affidavit. The Globe and Mail reports that Cotten signed a will on November 27 that included a $100,000 provision for his two pet chihuahuas. The diligent preparation of the will and the apparent lack of contingency planning around the customer funds have left many people scratching their heads. In the turmoil following Cotten’s death, QuadrigaCX has applied for creditor protection in the Nova Scotia Supreme Court. A preliminary hearing on the application is scheduled to take place this week.
“For the past weeks, we have worked extensively to address our …,” the exchange said in a statement posted on its website on January 31. “Unfortunately, these efforts have not been successful.” Citing court filings, the Chronicle Herald notes that “cold wallets” harness technologies such as USB drives and electronic devices that are not connected to the Internet. Cryptocurrency experts have expressed their surprise at the unusual situation. Taylor Monahan, CEO of cryptocurrency specialist MyCrypto, says that the deadlock highlights the need for a “multi-signature wallet” where a number of people have access to sensitive data. Cotten was in India to open an orphanage when he died.

Tonga Internet-Free for 2 Weeks

After almost two full weeks of being plunged into proverbial darkness, residents of the island nation Tonga regained internet connectivity as of February 2. Tonga, located roughly 1,000 miles northeast of New Zealand, is connected to high-speed internet in Fiji via an underwater fiber-optic cable. When the cable was severed on January 20, the population of about 100,000 residents couldn’t access the internet, make international calls, or even process credit card payments.
For those twelve days, long lines formed outside the headquarters of Tonga Communications Corporation, the national internet service provider, for access to internet “rations.” For small business owners, the loss of internet was quite harmful to their livelihood. Tony Matthias, the owner of a tour company and guesthouse, said he had been waiting in the line twice a day, often for several hours, because quick response time to potential customers is how he keeps his business running. “I always respond to messages as soon as I see them – that’s been my policy,” he reported. The country, which has only been wired for high-speed internet since 2013, never imagined that such a thing could happen. Piveni Piukala, a director of Tonga Cable Limited, said the company believed that a large ship had cut the cable in multiple places by dragging an anchor along the seabed, according to The Associated Press. “We don’t need a rocket scientist to tell us we need a better plan,” Piukala said, adding, “The cost of a backup is huge, and for a country like Tonga, we don’t have the luxury of money to put aside for a disaster like this.”

FB Removes 800 Fake Iranian Accounts

Facebook recently announced its removal of almost 800 “inauthentic” Iranian accounts that were part of a large-scale manipulation campaign operating in over 20 countries, including Israel. The various pages, groups, and accounts were part of a campaign to promote Iranian interests in numerous countries by creating fake identities of residents of those nations, according to a statement by Nathaniel Gleicher, head of cybersecurity policy at Facebook. According to the company, it first began looking into these kinds of activities after the revelations of Russian influence campaigns during the 2016 U.S. election, which were aimed at sowing discord.
Iran is considered one of the world leaders in information warfare and has been caught multiple times managing fake news mills whose goal is to influence people all over the world. In November, Reuters released a bombshell report with the Israeli cybersecurity company ClearSky detailing an Islamic Republic operation that ran 98 websites in 25 countries to shape public sentiment in a way that favored Iranian interests. All in all, Iran’s bundle of websites spread consistent propaganda across the Arab world and the West, mocking Donald Trump and the United States and trumpeting anti-Israel material. The expose caused Facebook to remove hundreds of pages it identified as fronts for Iranian information operations.

U.S.-China Tensions Escalate

The already-tenuous relations between the United States and China deteriorated further after the Justice Department filed criminal charges against Chinese tech giant Huawei. The charges come as Washington and Beijing attempt to find an end to the trade war stemming from tariffs imposed by President Donald Trump. The U.S. filed charges against Huawei for what it said was its rampant theft of intellectual property. The indictment detailed the extensive methods Huawei utilized to target the U.S. telecommunications firm T-Mobile, including paying employees to steal sensitive information from their competitors. Key to Huawei’s efforts was the technology underlying a robot T-Mobile developed to test cellphones, known as “Tappy.” Huawei allegedly schooled its employees who were permitted to enter T-Mobile’s lab containing the robot to copy its measurements and tasked a Huawei scientist with stealing the device for engineers to examine. Huawei’s Chief Financial Officer Meng Wanzhou was also charged with assisting other corporations to skirt the sanctions on Iran. Meng had been arrested in Canada in December after the U.S.
filed an extradition request, and her continued confinement has ratcheted up tensions between Canada and China. “Today we are announcing that we are bringing criminal charges against telecommunications giant Huawei and its associates for nearly two dozen alleged crimes,” Acting U.S. Attorney General Matthew Whitaker said last Monday. “China must hold its citizens and Chinese companies accountable for complying with the law.” FBI Director Christopher Wray added in a press conference that Huawei “relied on dishonest business practices that contradict the economic principles that have allowed American companies and the United States to thrive. “The prosperity that drives our economic security is inherently linked to our national security,” Wray continued. “And the immense influence that the Chinese government holds over Chinese corporations like Huawei represents a threat to both.” China strongly protested the charges, which it alleged were an effort by the U.S. to use “its state power to smear and crack down on targeted Chinese companies in an attempt to kill their normal and legal business operations.” Foreign Ministry spokesman Geng Shuang called on the U.S. “to stop its unreasonable crackdown on Chinese companies, including Huawei.” “The U.S. should immediately withdraw its arrest warrant on Ms. Meng and refrain from making a formal extradition request to avoid walking farther down a wrong path,” he said. The indictments come amid an effort by the United States to crack down on China’s rampant efforts to steal or copy sensitive U.S. technology. Observers say that Beijing’s intellectual property theft causes billions in damages to U.S. corporations due to lost income and allows Chinese corporations to undercut their U.S. counterparts.

96-Day Church Service Saves Armenian Family

A Dutch church declared victory after its marathon 96-day church service saved a family from being deported. The Dutch government had sought to deport the Tamrazyans, an Armenian family of five that has lived in the country for nine years. Exploiting a law that forbids law enforcement from entering houses of worship during services, the Bethel Church in The Hague held ongoing services starting on October 26. After 2,237 hours of round-the-clock rites, the Dutch government gave in and announced that the Tamrazyans will be granted permanent residency along with hundreds of other families under a new amnesty measure. The Bethel Church hailed the government’s decision to grant amnesty to the family. “On January 30, 2019, the continuous church service that has been held since October 26, 2018 in the Bethel neighborhood-and-church house has ended,” said Bethel in a statement. “The political agreement that was concluded on Tuesday offers the Armenian family Tamrazyan a safe future in the Netherlands.” Over 1,000 people took part in the 96-day service in an effort to stop the deportation of the family, which saw 450 pastors arrive from around the country to participate in the effort. “We are extremely grateful for a safe future for hundreds of refugee families in the Netherlands,” said Theo Hettema, who serves as the chairman of the General Council of Protestant Ministers in the Netherlands. “For months we have held up hope, and now that hope is taking shape.”

El Salvador’s New Prez Makes History

Nayib Bukele, a former mayor, coasted to a landslide victory in El Salvador’s presidential elections on Sunday. Winning 54% of the vote, Bukele won more than the other two candidates combined. Runner-up Carlos Callejas came in a distant second with 32%, followed by former foreign minister Hugo Martinez of the Farabundo Marti National Liberation Front.
In surpassing the 50% mark, Bukele avoided potentially heading to a runoff in March. “We have full certainty that we have won the presidency, and we have won in the first round,” declared Bukele before a raucous crowd. Bukele, a social media-savvy politician who frequently snaps selfies with his supporters, ran on a strong anti-corruption platform. Among his campaign promises were pledges to stamp out crime, eliminate bribery, and crack down on the gang warfare that has made El Salvador one of the most violent nations in the world. Bukele, 37, burst onto the local political scene in 2015 when he was elected mayor of San Salvador. His frequent criticism of senior members of his then-FMLN faction led to his expulsion, and he joined the small Grand Alliance for National Unity party. Bukele’s win marks the first time a president has been elected who does not belong to the leftist Farabundo Marti National Liberation Front (FMLN) or the Nationalist Republican Alliance (ARENA), something he noted in his victory speech. “This day is historic for our country. This day El Salvador destroyed the two-party system,” Bukele told supporters.

A Gantz-Lapid Merger?
A senior lawmaker in Yair Lapid’s Yesh Atid faction has confirmed that talks are underway to possibly run together with the Hosen L’Yisrael party headed by former IDF Chief of Staff Benny Gantz. “Conversations are being held between Yair Lapid and Benny Gantz, and a decision will come in the next two weeks,” confirmed MK Ofer Shelah. The lawmaker cautioned, however, that any such union would have to be headed by Lapid and not by Gantz. Shelah added that “the most established government alternative in Israel is Yesh Atid led by Yair Lapid, and it needs to lead those who want to change the government.” The revelation comes as Gantz has surged in the polls following his maiden political speech last Wednesday, with surveys showing that the general is siphoning support off Yesh Atid. Gantz, who had commanded the IDF from 2010 to 2015, had broken his long media silence and announced in a widely-viewed speech that he intended to replace Prime Minister Netanyahu in the coming April elections. “Instead of serving the people, the government looms over the people and finds the people to be a bore,” alleged Gantz. “It does not see the working man and the working woman. It does not see families moaning under the cost of living and the young people who cannot buy an apartment.” Initial surveys after the speech found that Gantz’s Hosen L’Yisrael party surged from the 12 Knesset seats it has been averaging to anywhere between 21 and 24. In another poll by the Hadashot television channel, Gantz and Netanyahu were tied in the polls as most suitable to be Israel’s prime minister, the first time someone has tied Netanyahu in more than a decade. Meanwhile, Yesh Atid dropped from its average of 15 seats to the single digits. However, numerous surveys found that a Yesh Atid-Hosen L’Yisrael union would become Israel’s largest party with 36 seats, five more than the 31 the ruling Likud party is expected to receive.
Previous talks between Lapid and Gantz had stalled due to Lapid’s demand that he remain number one and be the bloc’s candidate for prime minister, something to which the general had previously refused.

Unit 669 Trains Citizens Around the World

A new report by the Times of Israel details a new effort by members of Israel’s 669 unit to train people around the world in emergency first aid. 669 is considered one of the IDF’s most elite units. A Search and Rescue team attached to the air force, 669 is tasked with rescuing downed pilots from behind enemy lines. Recruits are only accepted following a punishing week-long tryout, and its 24-month rigorous training regimen is the military’s longest after its pilot’s course. As the unit deals mainly with rescuing wounded soldiers, every operative is a trained medic, making 669 the only combat unit in the IDF where members must pass a medics course. Other than first aid, 669 personnel are also trained in advanced counterterrorism tactics, as well as hostage rescue, hand-to-hand combat, and urban warfare. Now, a new initiative made up of 669 alumni aims to train civilians around the world to perform emergency first aid wherever it may be needed. Known as the 669 Alumni Association, it runs programs teaching rudimentary medical skills that members learned on the battlefield, such as how to treat heart attacks and tie a tourniquet. The initiative also hosts a medical conference every year and distributes special first aid kits modeled after the ones 669 troops use during their military service. The effort was founded by Bar Reuven, a Tel Aviv native who spent five years in 669. He told the Times of Israel that he was struck by the idea after witnessing a woman collapse on a subway platform in Brooklyn, New York. “After the medics came and took her to the hospital, I got back on the train, and people clapped for me,” Reuven recalled. “Like I was a hero.
But all I did was respond. And I realized how many people are afraid to help because they don’t know what to do. “My teammates and I have years of experience saving lives,” added Reuven. “Once we leave the military, that experience is not fully utilized. How can we pay it forward and teach other people how to save lives too?” The 669 Alumni Association has already trained hundreds in the basics of first aid. With its board comprised of influential figures such as former Israeli Defense Minister Ephraim Sneh, members of the association hope that they can utilize their clout to expand their life-saving activities. “The 669 organization is just at the beginning of its life and there is a huge horizon for it to expand into,” said David Ben Eli, a 669 alum and current New York City firefighter. “Just look at the dramatic improvement in cardiac survival rates in Washington, for example, after the state mandated the teaching of CPR in high schools.”

82 Ethiopian Immigrants Land in Israel

Although a cabinet decision in 2015 promised to bring the entire Falashmura community to Israel over a five-year period, the government has not budgeted the approximately NIS 200 million ($55 million) per year needed to absorb the new immigrants. Just one Ethiopian Jewish family was allowed to immigrate to Israel in 2018, that of Israel Bible Quiz participant Sintayehu Shafrao. At least 1,000 immigrants are expected to come to Israel in 2019, but the fate of the rest of the community is uncertain. On Monday evening, the group of new immigrants was welcomed at Ben Gurion Airport by Immigration and Aliyah Minister Yoav Gallant and Jewish Agency Chairman Isaac Herzog. Many of them were reunited with their relatives living in Israel, some of whom had been waiting for their loved ones for many years. “This is an exciting moment, and the whole of Israel is embracing you.
The land of Israel is embracing you,” Herzog said on Monday, calling on the government to bring the rest of the Ethiopian Jews to Israel. Alisa Bodner, spokeswoman for an Ethiopian-Jewish activist group, criticized the decision to bring only 1,000 of the thousands waiting for permission to come to Israel, calling it “a cruel game that forces parents to make an inhuman decision between their kids in Israel and their kids in Ethiopia. “We are far from content with the partial and superficial fulfillment of the decision adopted by [Prime Minister] Benjamin Netanyahu’s government in 2015,” she said. “While the Israeli government begs other communities in the world to make aliyah, it is ignoring its decisions regarding Ethiopian Jewry and thus continues the discrimination against members of the Ethiopian community,” she added. Because the Interior Ministry does not consider the Falashmura to be Jewish, they cannot immigrate under the Law of Return and therefore must get special permission from the government to move to Israel. About 135,000 Ethiopian Jews currently live in Israel. Some 22,000 of them were airlifted to Israel during Operation Moses in 1984 and Operation Solomon in 1991.

IDF to Demolish Home of Ofrah Terrorists

The IDF signed a demolition order last week for the destruction of the home of Salah and Asam Barghouti, who are responsible for the recent deadly terror attacks in Ofrah and Givat Assaf. An appeal by the family to spare their home was rejected. The IDF Central Command’s signature caps off the legal battle to destroy the Barghouti brothers’ home. The IDF had issued the demolition order on January 20, but its implementation had stalled due to numerous legal appeals by the Barghouti family. Israel commonly demolishes the homes of terrorists in order to deter future attacks.
Salah and Asam, who grew up in a prominent family affiliated with Hamas, had shot seven Israelis on December 9 after the brothers opened fire at a hitchhiking post near Ofrah, about a half hour drive from Jerusalem. Among those shot was Bet El native Shira Ish-Ran, who was 30 weeks pregnant with her first child. Doctors at Jerusalem’s Shaarei Tzedek Hospital performed an emergency caesarean section in an attempt to save the baby, but the newborn passed away three days later. Salah was killed three days later during a shootout with Israeli special forces in his hometown of Kobar. The next day, Asam shot and killed Sgt. Yosef Cohen and Staff Sgt. Yovel Mor Yosef while they guarded a bus stop at Givat Assaf, only a mile away from the attack in Ofrah. Another soldier was seriously wounded in the attack and remains in critical condition. Asam successfully evaded a massive manhunt until he was nabbed in a joint operation by Israel’s elite Yamam SWAT team and the Shin Bet in the village of Abu Shukheidim, north of Ramallah.

Construction Begins on New Border Wall

Israel began constructing a massive 20-foot wall that will seal off the Gaza Strip from the Jewish State last week. The hulking 40-mile steel edifice will be built by the Defense Ministry and aims to prevent Gaza-based terrorists from infiltrating into Israel. Costing an estimated NIS 3 billion, the fence will be erected on top of the underground 100-foot barrier Israel has been building to stop Hamas’ terror tunnels from crossing its borders. The barrier weighs six tons and is slated to be finished in 2019. “On Thursday, we began work on the final component of the Gaza Strip border barrier project. The obstacle is unique and specially designed to protect against the threats from the Strip and to give a superior solution to preventing infiltration into Israeli territory,” said Brig. Gen. (res.) Eran Ofir, who is tasked with overseeing the project.
The IDF says that the fence will be a significant upgrade from its predecessor, which is dilapidated and falling into disrepair. In recent months, dozens of Gazans have infiltrated into Israel through the current fence, including three Palestinians who managed to elude an IDF manhunt for over 12 hours in 2017. In contrast, the new fence will sport state-of-the-art technology, including pressure sensors and video cameras, and will be topped by remote-controlled machine guns. The Defense Ministry said in a statement that the fence “is similar to the one on the Egyptian border, but it has significant improvements and includes innovative security elements.” Prime Minister Netanyahu invoked the fence during his weekly cabinet meeting on Sunday. “At the end of last week, we began the construction of the barrier on the Gaza border,” said Netanyahu. “The barrier will prevent the infiltration of terrorists from Gaza into our territory. Those in Gaza have to understand that if they do not keep quiet, we will not hesitate to act.” The imminent construction comes amid Israel’s success with a similar barrier that was constructed in recent years on its border with Egypt. Built to stem both the flood of illegal immigrants from Sudan and Eritrea and to prevent Islamic terrorists from infiltrating into Israel, the barrier caused infiltrations to drop from 14,669 in 2013 to only 14 in 2016.

INF Treaty with Russia Falls Apart

Russian President Vladimir Putin said that his country would develop new weapons after the United States suspended compliance with the Intermediate-Range Nuclear Forces Treaty (INF). Inked at the height of the Cold War back in 1987, the accord bans the U.S. and Russia from using medium- and short-range missiles. “Our American partners announced that they are suspending their participation in the treaty, and we are suspending it too,” said Putin. “All of our proposals in this sphere, as before, remain on the table.
The doors for talks are open.” Among the weapons Putin said Russia would develop are the sea-based Kalibr missile and hypersonic rockets. Putin said, however, that his country did not intend to launch a new arms race with the U.S. and would refrain from deploying medium-range missiles unless the U.S. did so first. The United States announced that it would suspend compliance after compiling evidence that Russia had flagrantly violated the treaty on a consistent basis. According to the U.S., Russia’s new SSC-8 rocket falls within the 500-5,500 km (310-3,400 mile) range outlawed by the INF. Secretary of State Mike Pompeo said that the U.S. has given Russia 60 days to resume honoring the treaty or the U.S. would pull out of the accords entirely. “Russia has not taken the necessary steps to return to compliance over the last 60 days,” noted Pompeo. “It remains in material breach of its obligations not to produce, possess, or flight-test a ground-launched, intermediate-range cruise missile system with a range between 500 and 5,500 kilometers.” Pompeo added that “the United States has gone to tremendous lengths to preserve the INF Treaty, engaging with Russian officials more than 30 times in nearly six years to discuss Russia’s violation, including at the highest levels of government.” The U.S. had signed the deal with Russia in 1987 amid concerns that the Soviet SS-20 rocket could devastate Europe. Within five years, 2,100 rockets were destroyed under the terms of the INF, which played a crucial part in keeping intermediate-range nuclear missiles out of Europe until today.

First Female U.S. Fighter Pilot Dies

Capt. Rosemary Mariner, the first woman in the United States to become a fighter pilot, passed away following a long battle with ovarian cancer last week. She was 65. Born in Texas to a U.S. Air Force pilot, Mariner got her pilot’s license at 17 and went on to break a slew of aeronautical records during her 24-year career in the U.S. Navy.
In 1973, she was a member of the Navy’s first all-female pilots’ course and became the first woman to become a fighter pilot a year later. In 1982, Mariner was the first woman to deploy on an aircraft carrier and rose to become the first woman appointed to command a tactical air squadron. In 1997, Mariner retired at the rank of captain after logging more than 3,500 flight hours in 15 different aircraft. The U.S. Navy honored Mariner’s accomplishments by performing a historic “Missing Man” flyover in her honor, a formation reserved to honor aviators who perished in combat. The five female pilots flew their F/A-18 “Super Hornets” over Mariner’s grave in Tennessee on Saturday. Mariner’s husband, Navy Cmdr. Tommy Mariner, said that his deceased wife would have enjoyed the first-ever all-female flyover but “certainly would not say that that component is necessary.” “It’s wonderful that the Navy can do that and it’s good that they have that many women where they can fill out all the cockpits with women,” he said. “But that would not be a requirement for Rosemary.”

U.S.’s Largest-Ever Fentanyl Bust

The U.S. Customs and Border Protection (CBP) made its largest-ever bust of the drug fentanyl when it recently intercepted a 254-pound shipment on the Mexican border. The deadly narcotic was concealed in the floor of a tractor-trailer that was transporting narcotics across the Nogales Crossing. CBP Port Director Michael Humphries says agents were alerted when the vehicle’s load was abnormally heavy, and a drug-sniffing dog soon uncovered the stash. “They like to use vehicles with hidden compartments that they’re able to track between ports of entry,” said Humphries. Other than the fentanyl, agents also found a large stash of meth. The fentanyl shipment had a street value of $3.5 million and could have caused 115 million overdoses. “This amount of fentanyl our CBP officers prevented from entering our country equates to an unmeasurable, dangerous amount of an opioid that could have harmed so many families,” Humphries said at a press conference. President Donald Trump praised the CBP for nabbing the stash in time, tweeting, “Our great U.S. Border Patrol Agents made the biggest Fentanyl bust in our Country’s history. Thanks, as always, for a job well done!” The previous biggest fentanyl interception by U.S. law enforcement occurred when the DEA nabbed 145 pounds in Queens back in August 2017. The seizure comes as fentanyl has become the biggest cause of overdoses in the U.S., surpassing other popular drugs such as OxyContin. In 2017 alone, fentanyl and similar substances were responsible for the deaths of 29,000 Americans, a rapid rise from the 3,100 lives the drug claimed in 2013. Central to the number of overdoses is the drug’s high potency; a bag the size of a sugar packet is sufficiently toxic to kill 500 people. The Drug Enforcement Agency (DEA) warns that “fentanyl is potentially lethal, even at very low levels. Ingestion of doses as small as 0.25mg can be fatal.” Fentanyl’s potency in small amounts is a boon for drug traffickers, who can easily ship the substance in the mail without fear of getting caught.

U.S. Cracks Down on Birth Tourism

U.S. federal agents arrested 19 people during a raid last week targeting “birth houses” that enabled foreign citizens to grant their children
foreign citizenship by giving birth on American soil. Those detained are facing charges of immigration fraud and money laundering. Prosecutors said that the scheme assisted pregnant Chinese women with coming to the United States under phony pretenses in order to give birth in the U.S. By giving birth on U.S. soil, their children would become American citizens under the 14th Amendment. The indictments detailed how birth tourism operators would coach their clients on how to lie in interviews at the U.S. Consulate in Beijing in order to receive the necessary approvals. Customers were commonly told to say that they were only planning to stay in the U.S. for two weeks and instructed to avoid tight-fitting clothes in order to hide their pregnancies. Often, customers were given a guide titled "Strategies to Maximize the Chance of Entry," which listed strategies such as telling border officials that they had reservations at a "5 star hotel" such as "Trump International Waikiki Beach." "These cases allege a wide array of criminal schemes that sought to defeat our immigration laws – laws that welcome foreign visitors so long as they are truthful about their intentions when entering the country," said U.S. Attorney Nick Hanna. "Some of the wealthy clients of these businesses also showed blatant contempt for the U.S. by ignoring court orders directing them to stay in the country to assist with the investigation and by skipping out on their unpaid hospital bills," he noted. The raid is considered the first time U.S. law enforcement has targeted foreigners who arrive in the United States for the purpose of obtaining citizenship for their children.
In addition to the alleged immigration fraud, authorities said that the scheme posed a national security risk, as children with U.S. citizenship can enable their parents to receive a Green Card upon turning 21. Speaking with reporters, Department of Homeland Security official Mark Zito said that it was possible that China would exploit birth tourism to flood the United States with foreign nationals. "I see this as a grave national security concern and vulnerability," said Zito. "Are some of them doing it for security because the United States is more stable? Absolutely. But will those governments take advantage of this? Yes, they will."

U.S. Marines Cleared of War Crimes Charges

Following a long 12-year battle to clear their names, a group of former United States Marines will be cleared of war crimes charges that have dogged them for years. On March 4, 2007, retired Marine Major Fred Galvin was leading his team of elite Marine Special Operations near the Afghan village of Bati Koti when they were attacked by a suicide bomber. Following the blast, Galvin and his men were involved in an intense firefight that left their six-vehicle convoy in flames, yet they managed to repel the Taliban insurgents. After returning to base, they found that the clash had become an international incident after the Taliban claimed that U.S. troops had killed innocent civilians. Exhibiting bullet-riddled corpses of people they said were killed by the Marines, along with bombed-out ambulances, the Afghan insurgents exploited the firefight as proof of American brutality towards innocent villagers. Despite discrepancies in the Taliban's story, the U.S. military condemned Galvin's men in an apparent effort at damage control. A criminal investigation was opened immediately, and senior U.S. military officers leaked disparaging information about Galvin's team to the media. A subsequent report compiled by the Pentagon found that the Marines had acted improperly during the firefight.
Galvin was relieved of command, his team was kicked out of Afghanistan, and some operators even faced negligent homicide charges. The damage the fallout caused to the elite Marines was considerable, with some saying that it caused them to suffer from PTSD. In January, however, a new report completely exonerated Galvin and his men, capping off the long battle they waged to clear their names. According to the Board for Corrections of Naval Records, the initial criminal investigation that found evidence of wrongdoing was "inequitable and unjust," and the board completely cleared their records of charges of war crimes. The Naval Records Board also recommended that Galvin receive a retroactive promotion to Lieutenant Colonel, which he had been denied following the controversial gunfight. Such a promotion would grant the officer hundreds of thousands of dollars in backpay. Galvin hailed the decision to clear his name following years of effort to revisit the facts of the case.

"Speaking for the Marines and corpsmen of Marine Special Operations Company Foxtrot, the senior civilian leaders from the Pentagon who composed the board made a courageous decision and their 12-page report reflects their integrity and thoroughness from reviewing all of the facts in this case," Galvin told the military newspaper Task & Purpose.

The fate of the Marines following the 2007 bombing and resulting shoot-out had rankled members of the U.S. military community for years due to feelings that the senior command had thrown the operators to the wolves. "This was a big betrayal," retired Marine officer Steve Morgan told the Washington Post. Morgan had taken part in a panel in 2008 that cleared Galvin's men of wrongdoing yet watched as his findings were later buried by the Department of Defense. "Fred has finally come out on the right side of things, but it has come at a very steep price," Morgan said. "The lies. The deceit. That makes me so mad. That kind of behavior doesn't inspire confidence in the ethics of our military's leaders. It corrodes public trust in the institution."

Super Bowl: Super Exciting or Super Boring?

While many football fans were left disappointed by the low scoring in Sunday's Super Bowl, it appears that the game left advertisers disappointed as well. New data shows that the contest between the New England Patriots and the Los Angeles Rams suffered from the lowest Super Bowl ratings in a decade. The sports match-up got only a 44.9% rating, the lowest ratings the Big Game has received since 2009. Despite suffering a 5% drop from last year, the game still pulled in a big audience, as a 44.9% rating translates into more than 100 million viewers. Preliminary ratings are notoriously inaccurate and only measure viewers watching from home, excluding central viewing places such as restaurants and bars, so these numbers may not be correct. The Nielsen ratings also do not include sports fans who stream the game from the internet, something that has become increasingly popular in recent years. Super Bowl 53 was seen by many as possibly one of the least interesting contests in NFL history, as the Patriots' 13-3 defeat of the Los Angeles Rams was characterized by an anemic offense by both teams and frequent punts. Patriots quarterback Tom Brady earned his sixth Super Bowl ring with the win, a record for an NFL quarterback.
The 41-year-old Brady also made history as the oldest quarterback to start a Super Bowl, beating out Peyton Manning, who won Super Bowl 50 at the age of 39. Brady has appeared in nine Super Bowls over his career, something that is unmatched by any other football player past or present. "It probably won't sink in for a very, very long time," Brady said following the victory. "I'm just so blessed to play with the best teammates through the years from our '01 team and all the way through now," he added. "I love all those guys. That's what makes this special, man. It's a brotherhood. All these relationships are so important in my life, and I can't cherish it enough. It's going to be a celebration tonight."

5 Killed in CA Plane Crash

At least five people were killed after a plane broke apart midair over the suburban California neighborhood of Yorba Linda on Sunday. Besides the pilot, four other people were killed when the house they were staying in went up in flames caused by falling debris. Lt. Cory Martino of the Orange County Sheriff's Department told reporters that two of the deceased owned the home and were watching the Super Bowl with friends when a fire engulfed the structure. Television footage showed an airplane wing burning near the home while a neighbor attempted to extinguish it with a garden hose. Eyewitnesses said that the 1981 twin-engine Cessna 414 broke apart in midair and turned into a fireball before hurtling to the ground. "The plane blew up about 100 feet off of the ground. The plane blew up in the sky," Yorba Linda resident Jared Bocachica told KTLA news. "I come out.... It's raining plane parts from the sky." The small Cessna had taken off from Fullerton Municipal Airport shortly after 1:30 p.m. on Sunday and flew for 10 miles before suffering engine trouble. National Transportation Safety Board (NTSB) officials said that the aircraft climbed as high as 7,800 feet before spiraling out of the air.
The NTSB also said that the plane left a debris field over four blocks long and that it would take up to a year to carry out a full investigation.

Hawaii: Ban Cigarettes for Anyone Younger than 100?

The state of Hawaii has some of the most restrictive cigarette laws in the nation. In 2016, it became the first state to raise the age to buy the cancer sticks to 21. Now, a new bill introduced in the state's House calls for raising the cigarette-buying age to 30 by next year, then to 40, 50, and 60 in each subsequent year, and eventually to permit cigarette sales only to those older than 100 by 2024. "The legislature finds that the cigarette is considered the deadliest artifact in human history," the new bill states. In other words, by the time the bill runs its course, Hawaiians won't be seeing any cigarettes on their stores' shelves. Unless, of course, some tourist wants to bring them in. Rep. Richard Creagan is the bill's sponsor. Currently, most states allow 18-year-olds to buy cigarettes; four have raised the minimum age to 19. The bill notes that Hawaii "is suffering from its own addiction to cigarettes in the form of the large sums of money that the State receives from state cigarette sales taxes," to the tune of $100 million annually. In 2015, the National Academy of Sciences released a report that argued that increasing the age to buy tobacco to 21 would have a "considerable impact" on the age at which someone takes their first puff. The report also suggested that "if someone is not a regular tobacco user by age 25, it is highly unlikely he or she will become one." Creagan told the Hawaii Tribune-Herald he's confident the bill will survive any court challenges, as the U.S. Constitution does not recognize smoking as a fundamental right.
In 2012, a federal appeals court upheld a lower court ruling against a smoker who challenged an anti-smoking ordinance in Clayton, Missouri, on grounds that it violated his constitutional rights.

Cuomo: Rich Fleeing NYS

This week, New York Governor Andrew Cuomo admitted that taxing the rich is not a good strategy for New York State. The Empire State is facing a $2.3 billion budget deficit, which the governor blames on the Trump administration's tax reforms. "We've set up reserves, but this..." Trump's 2017 act was not the sole cause of New York's budget fiasco, Cuomo said. He added that the stock market drop in December may have had an impact and also struck a blow to the bonuses given to Wall Street traders, which add a considerable amount to the state's annual tax collections. Cuomo warned against imposing increased taxes on the wealthy as a solution, as New York is already home to the nation's second-highest taxes on millionaires, according to North Country Public Radio. "This is the flip side," Cuomo said. "Tax the rich, tax the rich, tax the rich. The rich leave, and now what do you do?"

Americans are Workaholics

A new survey finds that about half of employed Americans consider themselves to be workaholics. The average American says that they work four hours a week for free and think about their job another four hours. More than half – 53 percent – were stressed out from work, even while completing the survey. What makes these people workaholics? Researchers found that worrying about work on an off day, feeling too busy to take a vacation, and checking emails immediately after waking up (something 58 percent of the respondents say they do) were the top three symptoms of workaholism. But nearly three in ten people (28 percent) say their job obsession is more than just a strong desire to succeed – it stems from financial necessity. The survey, commissioned by The Vision Council, also showed just how much the modern workaholic is looking at a computer, phone, or other digital device. The average participant was found to log 7.5 hours of screen time daily, though 35 percent say they spend more than nine hours each day focused on a screen. "The human eyes were not designed to look at digital devices – not to mention nearly as long as modern individuals do," says Dr. Justin Bazan, practicing optometrist and medical advisor to The Vision Council, in a statement. "With Americans' screen time hours nearing the double digits, and them spending their entire workdays – and more – on digital devices, it's imperative that individuals take a serious look at the implications on the eyes, especially, as they're the organs taking the brunt of all this screen time." The survey was conducted by market research firm OnePoll.

NJ Rain Tax

Now, when it rains in New Jersey, residents are going to pay. A new bill in the Garden State will tax residents and businesses more when the weather turns dreary. The bill calls for the creation of local or regional storm water utilities, giving local counties and municipalities the power to collect a tax from properties with large paved surfaces such as parking lots, CBS2's Meg Baker reported, which includes businesses and homeowners. The bill passed in the Senate and the Assembly and is now headed to Gov. Phil Murphy's desk. "A lot of our economy is based on, obviously, the shore. We gotta make sure we keep it that way," former governor and current state Senator Richard Codey said. Homeowners are not too pleased with the extra tax burden. Some Republicans have dubbed the bill the "Rain Tax," saying another tax makes New Jersey even more unaffordable. State Sen. Tom Kean Jr. agrees. "We all want to protect our environment. We all want to preserve it for future generations. But this is a weighted tax. The citizens of New Jersey…really with no oversight have no way to defend themselves against tax increases at local levels," Kean said.

Ford's Anti-Semitism Results in Firing

The Dearborn Historian's editor-in-chief found himself out of a job last week after publishing a wide-ranging exposé of auto pioneer Henry Ford's anti-Semitism. Bill McGraw had written a 3,700-word report on Henry Ford's hatred for Jews in the magazine's January issue. The article chronicled Ford's admiration for Adolf Hitler and quoted Ford as saying that "the Jew is a race that has no civilization to point to, no aspiring religion, no great achievement in any realm." McGraw also called out Detroit's historical reluctance to focus on Ford's hatred of Jews due to the role he played in building up the city. "In general, metro Detroit and its institutions tend to treat Mr. Ford gently when it comes to his dark sides," McGraw wrote. "But his anti-Semitism is much more than a personal failing."

The report was too much for Dearborn Mayor John O'Reilly, who oversees the Dearborn Historical Society, which prints the magazine. The mayor ordered the post office not to mail out the issue and summarily fired McGraw. O'Reilly said that Dearborn's large Arab population rendered it too sensitive to be stirring up ethnic tensions between groups. "It was thought that by presenting information from 100 years ago that included hateful messages — without a compelling reason directly linked to events in Dearborn today — this edition of The Historian could become a distraction from our continuing messages of inclusion and respect," he said in a statement. 35% of Dearborn residents are Arabs, and the city is seen as one of the largest Muslim-dominated cities in the U.S. In opposition, the Dearborn Historical Museum Commission called on the mayor in a non-binding resolution to overturn his order forbidding the magazines from being sent out. McGraw, who has called Dearborn, Michigan, home for over 30 years, said that he did not regret writing the article about Ford and alleged that the current political climate in the U.S. meant that Ford's views should be examined more than before. "It seems if Dearborn is going to be proud of Henry Ford, we should look at the whole picture," McGraw told the Detroit Free Press.

$35K Hair

George Washington is best known for being the first president of the United States, but now his hair is making headlines. This week, a lock of Washington's hair sold at auction for more than $35,000. The hair was sold by the family of Alexander Hamilton, who served under the first president of the United States as secretary of the treasury. The final bid for the hair was $35,763.60. Also included with the piece of hair was a letter dated March 20, 1870, from James A. Hamilton, Alexander Hamilton's son. The auction house said the strands that were sold are about 5.31 inches long, are gathered together with string, and are affixed to a card with sealing wax. The New Jersey-based auction house said the lock is unusually hefty, unlike other locks of hair from famous people, which can break down over time. "Generally, we have stayed away from 'hair' because of the lack of strands and insufficient authenticity," the auction house, Lelands, said in a lot description on its website. "This piece is the exception." Let's hear it for the hair.

Tunnel Thieves

Although it wasn't titled "Ocean's 11," this caper seems like it should be something in the movies. On Sunday, authorities were baffled when they arrived at the BNP Paribas Bank in Antwerp, Belgium, in response to a security alarm going off. When they came to the bank, nothing seemed out of the ordinary. The vault doors were secured, and there was no sign of any activity. But looks can be deceiving. Upon entering the vault, authorities were greeted by 30 empty deposit boxes, a hole in the floor, and a tunnel reaching into the city's sewer system. For now, the robbery is under investigation, as authorities attempt to piece together what exactly was stolen. The tunnel was dug from a home several hundred meters away into the sewage system. From there, the thieves had to make their way through tight sewers – less than a meter wide – towards the bank, while risking the prospect of being flushed with water or harmful vapors. Upon arriving at the bank, the criminals had to dig another tunnel to get into the bank's vault. The bank is located in Antwerp's famed diamond district, the largest in the world, which claims an annual turnover of $54 billion. This story gives new meaning to the saying "A diamond in the rough."

Vanilla Ice

How long will the ice last? That's the $500 question in Vermont. Officials have launched an annual contest where contestants guess how long the ice will last on Lake Memphremagog. For its "Ice Out" contest, Newport Parks and Recreation will put a large, wooden depiction of a bottle of vanilla extract on a platform attached to a time clock. It will record when the facade, called "Vanilla Ice," drops into the water.
The lucky person who predicts the closest time will win 50 percent of the contest pool, which usually totals around $500. The rest of the proceeds will benefit the Gardner Memorial Park Playground and Splashpad project. Tickets are going for $3 a ticket, or $5 for two or $10 for five tickets. Each ticket bears a date and time that the contestant predicts will be the time the wooden vanilla sculpture will plunge into the lake. If the fake vanilla bottle crashes through the ice at that time, the winner gets the cash. The deadline to submit predictions for this year is April 1, or when the ice goes out. Sounds nice to get a prize for a slice of ice.

Durian, Durian

An Indonesian variety of the durian – a pungent, spiky fruit considered a delicacy across many parts of Asia – is making headlines. Two of the rare durians were put on display, with the texture described as creamy like butter. Sudarno, the farmer who grew the fruit, said that of the 20 durians that grew from his tree, only four were able to be sold. Two of them had to be discarded after they rotted. Durians are often grown in family orchards or on small-scale farms and are hugely popular in many parts of Asia. But despite their delicious taste, durians are known for their horrible odor, often smelling like an open sewer or turpentine when ripe. The fruit is banned in some airports, on public transportation and in some hotels in Southeast Asia because of the smell. Sounds like they're going bananas over a piece of fruit.

He's a Snake

Want to name someone special after a slimy reptile? An Australian zoo is offering people the opportunity to name a snake after someone whom they feel is perfectly suited to the reptile. The Wild Life Sydney Zoo launched a competition to name a brown snake, one of the most venomous species in the world, in honor of the winner's least-favorite friend.
The competition website calls on entrants to make a $1 donation to the zoo and to explain in 25 words or less why their former friend deserves to have a snake share their name. "Not only will you know that your ex has a snake named after them, but you will also receive a certificate and the opportunity to visit the snake for FREE every day for the next year," the website boasts. The winning entry, which will be chosen by the zoo's reptile team, will be announced by February 14. Ssss-sounds ssss-spectacular.

Sausage Hotel

Love sausages? Head to the world's first sausage hotel. Claus Boebel, a fourth-generation butcher in Germany, is celebrating his favorite food with the opening of the bratwurst bed-and-breakfast. He opened it near Nuremberg in a converted barn adjacent to his family butcher shop. Boebel Bratwurst Bed and Breakfast, which features sausage imagery in nearly every aspect of the decor, has attracted guests from around the world during its first four months of operation. "I want to bring tourists from all over the world to Rittersbach, my home village," the 48-year-old explained. The restaurant on-site is not for vegetarians. It features one dish: bratwurst, served in many different styles. Guests who head to the hotel are bombarded with sausage-themed wall art, pillows and other meat-related decorations. Sounds like the "wurst" hotel ever.

Around the Community

HaRav Dovid Schustal Inspires All at the YOSS Motzei Shabbos Learning Program

This past Motzei Shabbos, Yeshiva Toras Chaim - Bais Binyamin at South Shore was privileged to host the Lakewood Rosh Yeshiva, HaRav Dovid Schustal, shlita, at its Motzei Shabbos Learning Program. The large crowd of tinokos shel beis raban and their fathers (and in many cases, grandfathers, as well) had the great zchus of seeing Rav Schustal and hearing his penetrating message overflowing with ahavas haTorah and ahavas Yisroel.
Rabbi Avraham Robinson, menahel of the elementary division of the Yeshiva, in his introduction described the lineage of some of the gedolim responsible for building Torah in America: namely, Rav Yaakov Kamenetzky, zt"l, whose son, Rav Binyamin, zt"l, founded Yeshiva of South Shore, and Hagaon Rav Aharon Kotler, zt"l, who was responsible for planting Torah in America by founding Beis Medrash Gavoha, Lakewood, where Rav Schustal is one of its Roshei Yeshiva.

Rav Schustal with Rabbi Robinson and Rabbi Drebin

Rav Schustal was beaming as he looked into the crowd of kinderlach and expressed his delight at seeing such a beautiful gathering of teiyera neshamos filling the Beis Medrash learning Hashem Yisbarach's Torah! His message to the olam was crystal clear. "How fortunate we are," he thundered, "that we are Hashem's chosen nation, the Am Kadosh, different than the nations around us. Hashem chose us to receive the most special gift ever to be presented in the entire history of the world, His precious Torah!" Yeshiva of South Shore has a learning program each Motzei Shabbos throughout the fall, winter and early spring. Attendees say it's the highlight of their week. To accommodate as many lomdim as possible, the Yeshiva has three satellite locations: the Shteeble in Cedarhurst, Kehillas Tiferes Tzvi and its newest location at Sha'arei Emunah. This week's program is one that will not soon be forgotten by the many who were fortunate enough to be in attendance. It was a night that our children witnessed the grandeur of Torah. After the program concluded, the boys and fathers lined up to give shalom to Rav Schustal and to receive his warm brachos.

Rav Yerucham Olshin, Rosh Yeshivah of Bais Medrash Govoha, visited Mesivta Shaarei Pruzdor this week. He gave divrei bracha and davened Mincha with the boys.
Rambam Rallies Against Abbas

Three days before final exams were about to end and vacation was about to begin, Rambam received a call from a well-known Jewish activist, Sander Gerber, saying, "Abbas is going to speak and be feted at the UN in 72 hours." Mahmoud Abbas, the president of a self-proclaimed Palestinian State, was invited by a group of 135 nonaligned countries which represent 80% of the world's population. That group voted unanimously to name Abbas as the president of their group since they recognized him as the president of the State of Palestine. The plan was to confer legitimacy upon Abbas, hailing him as the leader of these 135 countries, which would pave the way for Palestinian statehood on the world stage. Palestinian media reported this as an important step towards eventual UN recognition of the Palestinian State. Mr. Gerber and Rambam recognized the need for immediate action in response to this turn of events. Arrangements were fast-tracked, and the final exam schedule was rearranged so that the Rambam talmidim could speak out against injustice. A rally was planned for Tuesday, January 15 at 1PM, to coincide with the exact time that Abbas was scheduled to announce being proclaimed as leader of this world organization. Students arrived by the busload and soon heard from Rabbi Friedman, who introduced the program by thanking the NYPD and led the students in the chant of "G-d Bless America."

Stuart Force, Taylor Force's father, is seen in the foreground

When word came that Abbas' motorcade had arrived, chants quickly turned into "Kick him out," "No more pay to slay," "Your hands are drenched in blood," and "Stop the lies, stop the hate." After brief remarks, Rabbi Friedman introduced Mr. Stuart Force, whose son, Taylor, was brutally slain by a Palestinian terrorist in 2016. Taylor graduated West Point and served a tour of duty in both Afghanistan and Iraq.
Upon completion of his service he enrolled in a master’s program at Vanderbilt University and was visiting Israel on an educational program when he was stabbed to death. His killer was shot by police but, under the Abbas administration, was accorded martyr status, which entitled his family to a lifetime of bonus payments for their son’s “work.” Mr. Sander Gerber contacted Stuart and his wife, offered his heartfelt condolences, and began to work on and implement a plan to reduce and even eliminate American funding of the Palestinian Authority. With nonstop commitment they were able to pass legislation aptly named “The Taylor Force Act,” effectively cutting back on funding of the PA. Stuart spoke passionately about his son and about the fact that terrorists are being rewarded and encouraged to kill by Abbas.

Bassem Eid, a noted Palestinian human rights activist, spoke next. He openly criticized Abbas for misappropriating and diverting funds from building hospitals, schools, and infrastructure to “Pay to Slay” payments. Erin King Sweeney, Councilwoman from the Town of Hempstead, talked about the unique U.S./Israel relationship and criticized Abbas and the UN. Councilman Bruce Blakeman closed the program by saying, “Taylor’s spirit continues to live on and saves American lives by the legislation passed in his memory.” He thanked and lauded Mr. Eid, a Palestinian Muslim, “for his courage and speaking out against the Abbas regime.”

The rally concluded with a direct message to the participants upstairs who had invited Abbas. Students chanted, “You let terror win when you let Abbas in.” The rally attracted both local and national news outlets and emphasized the value of the Jewish obligation to speak out against anti-Semitism.
Dedication to Ari Dovid Block, z”l

The game’s MVP

For those of you who knew Ari Block, z”l, you probably knew that he was serious about learning and Torah. However, he also loved basketball, but we’ll get to that shortly. Ari, z”l, was born in La Palma, California, and raised in Phoenix, Arizona, where he attended the Phoenix Hebrew Academy (PHA) for elementary school, a true “out-of-towner.” Like many out-of-towners, he found himself in New York in 2001, after a year and a half in yeshiva in Eretz Yisroel. He spent several years in Yeshiva Sh’or Yoshuv. In 2006, he married Faigy Ludmir from Brooklyn. At that time, he was learning in the kollel and the smicha shiur in Sh’or Yoshuv while earning his master’s degree in Special Education from Touro College. In April of 2007, they had a son, Avrohom Yeshaya Block. In May of 2007, on the 25th of Iyar, Ari Block, z”l, left us very suddenly; yet he left a lasting impression on all of his family and friends.

Ari’s son cutting the court’s ribbon while Ari’s parents look on

When a family sits shiva, they get to hear story after story of how the person who left this world has impacted so many people. It is difficult to fathom the extent of influence a person has on other people. Recently, Ari Block’s effect on others was made clear. A fellow alumnus of Ari’s from the Phoenix Hebrew Academy spearheaded a campaign throughout the Phoenix Jewish community to dedicate the school’s newly renovated basketball court to Ari’s memory. This was a court that produced a lot of memories for Ari and his family. Ari’s father, Dr. Robert Block, was the coach of the PHA Thunderbolts for many years, and Ari was named Most Valuable Player in 1996.
Ari was a great basketball player and, as the donor plaque reads, “This new court will serve as inspiration for our students to play like Ari did: with courage, integrity, and a winning spirit.”

On January 27, 2019, about 300 people gathered at the PHA for a beautiful basketball court dedication ceremony. The many speakers included Dr. Robert Block, Noah Goldstein (Ari’s brother-in-law), Rabbi David Rebibo (dean of the PHA), and Rabbi Weiner (principal of the PHA), among several others. As Dr. Block noted in his address, “Ari’s greatest achievement was his transformation in leading, assisting, and scoring in his hasmada and absorption of Torah and mitzvos.” There was a touching ribbon-cutting ceremony; the ribbon was cut by Ari’s, z”l, son, Avrohom Yeshaya Block, who currently attends Yeshiva Darchei Torah in Far Rockaway. Lastly, they held an alumni basketball game, the “older” versus the “younger” alumni, with Dr. Block coaching the “older” alumni. That’s right, the “older” still got it and beat the “younger” 29-26. At the end, Avrohom Yeshaya Block was unanimously voted “Most Valuable Player” by both coaches and all players, on the very court where his father, Ari, z”l, played so many games, made so many memories, and won the same award years ago.

Rav Yerucham Olshin, Rosh Yeshiva Beth Medrash Gavoha of Lakewood, on his recent visit to Yeshiva Ketana of Long Island

Chaverim Member Appreciation Dinner

On Thursday, January 29, over 40 members of Chaverim of Five Towns and Rockaways gathered at their annual Member Appreciation Dinner, held at Traditions Eatery in Lawrence, NY. The dinner celebrates the Chaverim participants, all of whom are volunteers who give hours of their time to help motorists in need. Before the delicious buffet dinner, Hempstead Town Councilman Anthony P. D’Esposito addressed the members and described his great admiration for Chaverim.
In his own words: “Imagine a frigid cold night, much like tonight, and you experience a flat tire, dead battery, run out of gas, or lock your keys in the car or home. What would you do? Thankfully, here in the Town of Hempstead, we are blessed with Chaverim of Five Towns and Rockaways – a volunteer organization with over 75 members – people with huge hearts who leave the comfort of their home to help their neighbors. With my career in the NYPD and my years as a member of the Island Park Volunteer Fire Department, I can truly appreciate and understand the work these individuals do.” He continued, “I entered into this room, which is packed with volunteers – those who respond to incidents and those who man the dispatch center 24/6 – to simply say THANK YOU!”

Photo credit: Thomas S. Stanwood, Pinchas Lipsky and Naftoli Feitman

A special mention of appreciation goes to Rabbi Zvi Ralbag, rav of Congregation Bais Ephraim Yitzchok, who gave an inspiring dvar Torah to the members about the importance of the work they do for our community. In addition, Cedarhurst Mayor Benjamin Weinstock spoke to the members and presented Chaverim with a plaque. We would also like to thank Village of Lawrence Mayor Alex Edelman and Cedarhurst Trustee Israel Wasser for stopping by and showing their support for Chaverim.

Binyamin Lipsky, Shlomo Feldman, Zevi Goldstone, Ahron Slater and David Sharifan (L-R)

Cedarhurst Trustee Israel Wasser, Binyamin Lipsky, Chaverim Coordinator, Rabbi Mayer Kramer, Chaverim Founder, Cedarhurst Mayor Benjamin Weinstock, Hempstead Town Councilman Anthony P. D’Esposito and Lawrence Mayor Alex Edelman

Another award was presented to Unit F27, Shlomo Feldman, who went above and beyond the call of duty this past year. Following the awards and speeches was the highly anticipated entertainment, featuring mentalist Alain Nu. He performed some mind-boggling mentalism and interactive games which left everyone in hysterical laughter and amazement. A great time was had by all.
Most importantly, the members came away inspired, knowing that what they do makes a real difference in the community and is greatly appreciated.

2018 was a very busy and successful year for Chaverim. We received over 3,000 calls, a record for Chaverim, with over 90% of calls answered within 5 minutes or less. We also run defensive driving classes and car seat safety inspection events throughout the year. Chaverim is also planning to begin a program to teach basic car functions to new drivers in high schools throughout the community. The upcoming defensive driving class will take place on February 24 at 7 p.m. at the Marion & Aaron Gural JCC, 207 Grove Avenue in Cedarhurst. To sign up, email info@chaverim5t.org or call 516-331-1460.

Hempstead Town Councilman Anthony P. D’Esposito speaking

Machon Sarah High School, Torah Academy for Girls, proudly presents its biennial concert, “דור לדור ישבח מעשיך,” A Musical Journey Through the Generations, at Lawrence High School, 2 Reilly Road, Cedarhurst, NY: Motzaei Shabbos, February 23rd, 2019, at 8 PM; Sunday matinee, February 24th, 2019, at 12:30 PM; and Sunday evening, February 24th, 2019, at 7:30 PM. To purchase tickets, go to tagconcert.showclix.com. To place an ad in the Playbill, email tagconcert2019@gmail.com.

Oncology Doctor Runs for Team Lifeline in Memory of Beloved Patient

Every once in a while, someone comes into your life and changes you forever. For Dr. Andrew Silverman, oncology fellow at Children’s Hospital of New York Presbyterian at Columbia University Medical Center, that someone was a boy named Eli. Dr. Silverman ran the Miami Half Marathon with Team Lifeline in memory of Eli and to make a difference in the lives of other children and families living with pediatric illness. Team Lifeline runs in the Miami Marathon/Half Marathon, the Rock ‘n’ Roll Las Vegas Marathon/Half Marathon/10K, and the NYC Marathon. To learn more about Team Lifeline, visit www.teamlifeline.org.
MTA Takes Second Place at CIJE Hack-a-Thon

MTA took second place at the Center for Initiatives in Jewish Education (CIJE) Hack-A-Thon. The event, held on Wednesday, January 30 at Yeshiva University, presented teams of students from 13 different schools with the challenge of developing technology to assist the elderly at the Jewish Home at Rockleigh. Each team had just 4.5 hours.

An O-Fish-al Chessed

Photo credit: Benjamin Kanter

Shea Langsam, owner of Fish to Dish, Assemblymember Simcha Eichenstein, and Yossi Heiman, owner of Yossi’s Fish Market

On Sunday, February 3, Assemblymember Simcha Eichenstein presented a citation to Shea Langsam, owner of Fish to Dish, who opened his doors to Yossi Heiman, owner of Yossi’s Fish Market, after a fire destroyed Yossi’s location on January 12. “This act of chessed, offering a direct competitor space in your own storefront, truly goes above and beyond,” said Assemblymember Eichenstein. The citation reads as follows: “Citation is presented to Shea Langsam, Fish to Dish, in recognition of your kindness for opening your doors to a neighboring merchant, Yossi’s Fish Market, after a fire destroyed their location on January 12, 2019. This act of welcoming a competitor into your workplace goes beyond what is expected of a business person. This exemplifies what we should all strive to be like as New Yorkers. It is with great pride, as a Member of the Assembly from the 48th District, that I call attention to your kindness and thank you on behalf of the entire community. Simcha Eichenstein, Member of Assembly”
Rabbi Medetsky’s sixth grade class at Yeshiva Darchei Torah enjoyed the unseasonably warm temperature on their trip to Seasons Express this week

HAFTR Hawks Score

What a weekend for HAFTR Athletics! The JV basketball team traveled to Boca Raton this past weekend to play in the 2nd annual Katz Yeshiva/Step It Up JV basketball tournament. After beating the #1 seed host team, the Katz Storm, in the semifinals on Friday, the table was set for the championship game on Saturday night vs. the Frisch Cougars. In the finals, HAFTR found themselves down by 10 midway through the 2nd quarter. But the Hawks went on an impressive 18-3 run to go up 5 at the half and set up what was to be an exhilarating second half. The entire second half saw the lead change hands numerous times with the championship trophy up for grabs. Up by 1 point with 12 seconds to go, the Hawks had a defensive lapse and gave up the go-ahead basket to the Cougars. After a Hawks timeout, with 5 seconds remaining on the clock, the ball was in-bounded to Haimy Salem, who took it in strong and hit the game-winner as time expired. With the game and tournament over, HAFTR once again proudly and deservedly hoisted the championship trophy. Congratulations to our JV boys, who won this tournament for the second year in a row, and to Haimy Salem, who was named tournament MVP. This was an exciting experience for the boys and, once again, the HAFTR Hawks not only showed their strengths on the court, but also their middot and sportsmanship in their behavior throughout the tournament. Special thanks to Coach Ari Witkes for all his hard work for the team!

HALB Sukkah Fair

This past Sunday, HALB 5th graders participated in the 10th annual Sukkah Fair. After learning the intricate laws of sukkah, the boys constructed their own models.
The models were based on different cases that are brought up in the 1st and 2nd perakim of masechet Sukkah. The effort and attention to detail was obvious to all in attendance. Sukkot of all shapes and sizes were built, along with information boards and PowerPoint presentations. Mr. Altabe addressed the crowd with the emotional story of U.S. Army Chaplain Colonel Jacob Z. Goldstein building a sukkah on an army vehicle during the days that followed the 9/11 attack at Ground Zero. The students and parents connected to the story and appreciated the timeless message of the trust we have in Hashem when we sit outside in the temporary home of our sukkahs. The children also had the opportunity to reflect on the halachot of sukkah and that attention to detail is crucial. Thank you to Rabbi Steinberg and Rabbi Lieberman for putting together such a wonderful program.

Over 1,000 Attend Grand Opening of Kāmin Health Urgent Care at OHEL Ganger Family Medical Center

L-R: Mel Zachter, OHEL Co-President, Sonny Ganger, and Yussie Ostreicher at the Ganger Family Medical Center Ribbon Cutting Ceremony

With much fanfare, over 800 members of the community attended the Grand Opening Day of the Kāmin Health Urgent Care at OHEL at the Ganger Medical Center, held on Sunday, January 13, 2019. Set to a backdrop of carnival-like entertainment, games and a beautifully orchestrated concert, Sonny and Shani Ganger and their family led a ribbon-cutting ceremony for the new medical center.

R-L: Nachum Weingarten, OHEL Medical Director; Assemblyman Simcha Eichenstein; Barbara Kaminetzky; Montee Kaminetzky, Kāmin Health Urgent Care; David Mandel, OHEL CEO; Marc Katz, OHEL COO

“It’s not just a medical appointment, it’s a relationship.” Nachum Weingarten, OHEL Medical Director, defines the underlying approach of care: “Our Center provides state-of-the-art urgent care to individuals of all ages, from pediatric to adult to eldercare medical services.” Kāmin Health Urgent Care at OHEL serves the urgent medical needs of the entire Flatbush/Midwood and surrounding community. While the Grand Opening has passed, members of the public can still simply walk in and preregister, and receive a voucher for a free pizza and soda at “J II” for every person registered. The benefit of preregistration is that it ensures that whenever one needs urgent care services, one can be seen in minutes.

David Mandel, CEO of OHEL Children’s Home and Family Services, reflects, “The provision of medical services now available at OHEL represents yet another milestone as OHEL begins to celebrate its 50th anniversary. These medical services now ensure that the community can benefit from the most advanced healthcare under one roof at the new OHEL Jaffa Family Campus in Flatbush.” The campus is located at 1268 East 14th Street, Brooklyn, NY.

Kāmin Health Urgent Care at OHEL is open daily from 8:00 am until 9:00 pm, with convenient hours on Saturday night and Sunday. No appointments are needed. Free parking is offered. The center works on a walk-in basis and patients are seen within minutes.
Kāmin Health Urgent Care at OHEL prevents unnecessary visits to the emergency room. Urgent care services include the treatment of illnesses such as strep, flu, viral infections, MRSA infections, and UTIs. The center also does stitches, blood work, x-rays and vaccinations such as flu and tetanus shots. From the moment one enters the center and receives the warm reception, one will immediately appreciate the personal, quick service and the state-of-the-art equipment, x-rays and medical devices that ensure patients receive the best medical care. The waiting room boasts a free beverage center with gourmet coffee, tea and ice-cold bottles of water for patients to enjoy free of charge. There is also an impressive children’s play area which keeps children busy during their short stay. There is no need to switch one’s PCP before a visit to Kāmin Health Urgent Care at OHEL, and most insurances are accepted. If an insurance is not accepted, self-pay rates are very affordable. For questions or comments about Kāmin Health Urgent Care at OHEL, please contact Debbie Gorin at 718-686-3344 or email dgorin@kaminhealth.com.

SKA Keeps Up Torah Inspiration Over Winter Break
By Devora Schreier, SKA ‘19

Over the past winter vacation, students of the Stella K. Abraham High School for Girls had the opportunity to keep the Torah “spark” alive with a special learning initiative. The SKA SPARKS committee, a Torah lishma student program, with the help of Rabbi Isaac Rice, head of the Torah She’baal Peh Department, organized a WhatsApp group which sent out shiurim from many different speakers each day of the break, as well as a Mincha reminder, safeguarding Torah inspiration over vacation. Approximately 150 SKA students joined the chat and took part in this amazing project.
It was a really beautiful way to enable students to bring Torah into their everyday lives, and im yirtzah Hashem, this creative spark will carry us through an extraordinary second semester!

The SKA SPARKS initiative WhatsApp group

i-Shine at Make It Too

For the past 11 years, HAFTR has hosted i-Shine, Chai Lifeline’s afterschool program for children living with illness or loss in the family. When a recent fire caused a temporary closure in the school, Sharona Hoffman, owner of Make It Too in Cedarhurst, stepped in and offered to host the entire group in her store. The children enjoyed ceramic and canvas painting, soap making, face painting and more. “Our i-Shine children had a blast and we are so appreciative to Sharona Hoffman for her huge heart, generosity and for always going above and beyond for Chai Lifeline,” said Andy Lauber, director of i-Shine Five Towns.

Science and Torah at HANC
By Courtney Isler, Senior

Rabbi Ehrenfeld’s Bioethics class tested themselves with PTC strips in their study of genetic screening

HANC provides a well-rounded education and continuously looks for innovative and unique learning opportunities for its high school students. Over the past few years, the limudei kodesh department has created elective courses, based on student and teacher recommendations, that focus on relevant and interesting topics in Judaism. Courses include Jewish Snopes, Living as a Jew 24/7, Creating Your Best Life, Women in Halacha, Business Ethics, Bioethics, and the like. This semester, a group of junior and senior girls began a course in Bioethics and Halacha. The course, taught by Rabbi Etan Ehrenfeld, will explore many important and contemporary topics in the field of bioethics and how they intersect with halacha. Sources from a multitude of Torah scholars are introduced to further discuss the topics and the conflicts surrounding them. Some of the topics include organ donation, vaccinations, and surrogate motherhood.
The course exemplifies how the worlds of Torah and science intersect in our classrooms and in the lives of students.
Rabbi Yaakov Yankelewitz, Rosh Yeshiva of Yeshiva Chaye Olam in Monsey, giving a shiur to the bochurim in Mesivta Shaarei Chaim this week

Shevach Students Awarded Honorable Mention in Project Witness Nationwide Competition

This past October, in commemoration of the 80th anniversary of Kristallnacht, Project Witness announced a nationwide competition for middle and high school yeshiva students. The focus was to be the themes of “Rebuilding from Destruction,” “The Power of Tefillah” and “The Kedusha of our Shuls.” Students were encouraged to create original works of literature, art, craft, and music, and the judging took place on Asara b’Teves. Twenty-two students from Shevach High School participated in the competition and submitted pieces in various categories. The projects reflected great effort, creativity and deep reflection on the themes of the competition. Prior to the nationwide deadline, Shevach held its own in-house competition, with an exhibit and judging of the submissions by the Shevach faculty.

An impressive nine Shevach students were awarded Honorable Mention in the nationwide Project Witness competition: Nechama Ribowsky and Hadassah Gottesman for craft; Liora Karshigi, Esty Altman and Leah Scheiner for art; and Rikki Friedman, Ruchie Kops, Esti Levant, and Rivky Tannenbaum for literature. The winners of the Shevach competition included: Grand Prize winners Elinor Murdakhaev and Rachel Yakubov, for their model of the Baden Shul that was destroyed during Kristallnacht; first place winners Rikki Friedman, Esty Altman, Leah Scheiner, Rivky Tannenbaum and Esti Levant; and second place winners Aviva Keller, Ruchie Kops, Hadassah Gottesman, Nechama Ribowsky, Liora Karshigi, Batya Katayev, Geulah Pinchasov and Rochel Wagner.

Ms. Sara Nasirov, a history teacher at Shevach, served as the school’s liaison for the nationwide competition. She worked with the students to provide feedback, and coordinated the school’s in-house exhibit and judging guidelines under the direction of Mrs. Nechama Mirsky, Shevach associate principal of general studies and lecturer for Project Witness.

As a follow-up to the competition, students shared some of the takeaways they gained from participating. For some, it was an opportunity to gain a greater understanding of both the destruction and the courage it took for survivors to rebuild, and of the tenacious survival of the Jewish nation throughout history. Esti Levant expressed how working on her project encouraged her “to think about and gain a deeper awareness of the strength that it took for survivors to rebuild Jewish life after the devastation of the Holocaust.”

Liora Karshigi wrote, “Sometimes I feel like pictures can describe emotions better than words. I chose to submit a charcoal drawing of a shul that was ransacked by the Nazis, ym”sh, during Kristallnacht to try to capture the dark, depressing atmosphere of how the Jews must have felt at that time. I also submitted a drawing of a Magen David with shuls that were destroyed during Kristallnacht on each of its six sides and a beautiful shul that stands in Romania today in the center of the Magen David, representing the survival of the Jewish people. I wanted to emphasize the fact that despite the death and destruction we’ve faced, Hashem always sustains us and enables us to continue to practice Judaism no matter how many times our enemies try to destroy us.”

Reflecting on the tremendous loss and unspeakable devastation during the Holocaust, Rivka Lavian conveyed that the project inspired her “to be grateful for all of her possessions and to be appreciative for everything.” The tremendous koach of tefillah was another theme.
Geulah Pinchasov explained that she and her partner, Batya Katayev, chose to make a replica of the Zabludow Shul and decided to paint it entirely gold to reflect the purity and kedusha of the shul and the many tefillot that were recited there. Aviva Keller added, “Tefillah is not only something that connects us to Hashem, but it connects us to all past and future generations, because tefillah is the link in our mesorah, and no matter how buried in sins we are, we can always speak to Hashem.” All of the participating students are to be congratulated for their hard work, unique abilities, and for enriching our understanding of the competition themes.

Bais Yaakov of Queens Champions at Spelling Bee

Yasher koach to Abby Harris, a Bais Yaakov of Queens 8th grader, who won the Jewish Education Project Spelling Bee. After achieving first place at the school’s 4th-8th grade spelling bee, Abby represented BYQ and was the champion! She will continue on to the next round at the Daily News Spelling Bee. We wish her much hatzlacha.

Innovation for Shulamith Girls at CIJE-Tech Hackathon

For those looking for a tech challenge, look no further than the CIJE-Tech Hackathon.
Just ask Basya Vishnepolsky, Ariella Fohrman, Aliza Weiss, Leah Chaya Gluck or Chavi Feldman, the tech-savvy students who represented Shulamith High School at the CIJE Hackathon, and they’ll tell you that the program definitely pushed them to think outside the box and flex their creativity. Last Wednesday, January 30, at 10 AM, the girls arrived on the Yeshiva University campus for the Center for Initiatives in Jewish Education’s first annual Hackathon. Along with teams from various other schools, the Shulamith team sat and listened as they were addressed by The Jewish Home Family, an organization that provides care for the elderly. It was The Jewish Home Family that presented each team with the challenge that they would have to solve.

The organization described four different real-life scenarios that each pose a unique challenge to an elderly person. The students’ challenge was to come up with a tech-based solution. For example, one scenario depicted an independent elderly man who wants to live at home. However, he tends to be forgetful, especially when it comes to taking his medicine. This makes living independently complicated. Is there a way he can safely live at home? The students’ solution couldn’t be theoretical, though. It needed to be tangible. The teams were required to design and build a technological product as well as create a website for its remote monitoring.

Shulamith’s team tackled the challenge by building a prototype for a device that dispensed pills automatically. At the judging event, Shulamith girls presented their solution, which included a motor for the pillbox that, when activated, opened the slot in the pillbox to release the prescribed number of pills. Under the slot they included a scale which measured the weight of the pills, ensuring the correct amount was indeed dispensed. The scale also functioned as a sensor, the data from which was fed into a processing website created by the students.
It recorded every weight the scale measured, such that on a given day, this man’s doctor would know (by looking at the website) that the man had taken his pills and is managing independently. It’s pretty unbelievable that the girls managed to do all this in five hours. Shulamith students continue to think big and push limits.

Young Israel of Long Beach Annual Robert Chiger Scholarship Concert

The Young Israel of Long Beach will be holding its annual scholarship concert on Motzei Shabbos, February 23, 2019, at 8:00 PM. It will take place at the new Long Beach Hotel at 405 East Broadway, Long Beach, New York 11561. The concert will feature Uri Davidi, who will entertain the audience with his vast repertoire of popular and Chassidic Jewish music. He is known as one of the most dynamic performers to have made their mark on the Jewish music scene over the last several years.

The Young Israel of Long Beach is the cornerstone of the Orthodox Jewish Long Beach community and is led by Rabbi Dr. Chaim Wakslak. Rabbi Wakslak and Joseph Langer, a past shul president, conceptualized this scholarship concert over twenty-one years ago. Since that time, the proceeds from the scholarship concert have allowed many local community youngsters to attend yeshiva and/or overnight summer camps. Twelve years ago, following the untimely petirah of Robert Chiger, z”l, a young, vibrant member of the congregation, it was decided that his memory would be most appropriately perpetuated by renaming this scholarship fund The Robert Chiger Scholarship Fund. Bob had a unique connection to the youth of the synagogue and cared for their religious and character development. He was also a strong proponent of the YILB youth and sports programs.
This concert is strongly supported each year by Beth Chiger and Neil Sambrowsky and Beth’s children, Michele & Eric Ehrenhaus, Andrea & Ariel Gantz, Elliot & Chana Chiger, and David & Rachel Chiger. General admission tickets can be purchased for $30. For further information or to order tickets, please call (516) 431-9715 or look at the website.

Rav Moshe Krasnow’s seventh grade talmidim at Yeshiva Darchei Torah recently celebrated a siyum at their rebbi’s home. Rav Uri Orlian, rav of Shaaray Tefillah and father of one of the talmidim, was the guest speaker.

Tending To Our Seedlings in Gan Chamesh

Though Tu B’Shvat took place over winter vacation, Gan Chamesh, Chabad’s Early Childhood Center, took inspiration from this beautiful day to continue to explore Hashem’s miraculous natural world. The children enjoyed a unique Tu B’Shvat fair that promoted hands-on learning through real-life experiences. The fair gave the children an opportunity to manipulate natural materials. The young students experimented with soil, seeds, twigs, acorns and pinecones in individual sensory bins. They enjoyed a “Create a Forest” station, replete with real tree stumps, leaves, branches and grass. The children experienced pattern-making with fruit kabobs and tree-making at the light table, among other exciting activities.
The children gained a deeper appreciation of Hashem’s beautiful natural world and truly internalized the message of Tu B’Shvat. The fun continued in the classrooms as Box Week, an important part of the Gan Chamesh recycling initiative, coincided with the Tu B’Shvat fair. Toys in the classroom were replaced with boxes. Imaginations soared as the young students turned into engineers and architects while using the boxes to design elaborate play structures. It is these “out of the box” activities that continue to make the children in Gan Chamesh love learning.
Communitywide Active Shooter Workshop

Following the horrific tragedy in Pittsburgh and the rise in anti-Semitic hate crimes, community members from all different backgrounds came together for a life-saving educational evening, an NYPD active shooter training workshop. The event was coordinated by devoted community leader Councilman Donovan Richards (NYC Council D-31), chair of the Public Safety Committee. Chaim Leibtag, representing the White Shul administration, opened the evening by thanking the NYPD for their protection and involvement. Chaim then welcomed everyone to the White Shul and introduced the executive director of the JCCRP, Moshe Brandsdorfer. Moshe spoke briefly, thanking Councilman Richards for expediting the workshop, a training that was in high demand and therefore had a long waiting list. Moshe remarked, “We applaud Councilman Richards and his staff for putting this workshop together in record time. We all know that the further away we get from a tragic event the less seriously people take the follow-up measures. The Councilman and NYPD recognized this factor, and we are most appreciative toward them.”

Officer Gibbons of the NYPD Counter Terrorism Unit presenting at the workshop

The introductory speakers included Councilman Richards; Assemblywoman Stacey Pheffer Amato, who was a co-host of the event; and Deputy Inspector Vincent J. Tavalaro, Commanding Officer of the 101 Precinct. The Deputy Inspector spoke about taking proactive steps to ensure that your school, workplace and shul are safe. “We have units devoted to conducting security assessments for your facilities. Take advantage of these free services,” DI Tavalaro urged. Councilman Richards spoke about the strong relationship that exists between the local NYPD precinct and the community, one that everyone benefits from.
The Councilman spoke about being proactive and vigilant with security measures, something everyone can do.

Deputy Inspector Vincent J. Tavalaro, Commanding Officer of the 101 Precinct

The interactive program was then presented by Officer Gibbons of the NYPD Counter Terrorism Division. Officer Gibbons covered many of the various scenarios and statistics surrounding active shooters in the past number of years. The officer presented many real situations, educating everyone on what to do if they are in an active shooter situation. Officer Gibbons also spoke about measures that can be taken to prevent an active shooter situation. For example, generally, each active shooter displayed 4 to 5 concerning behaviors over time that were observable to others around the shooter. The most frequently occurring concerning behaviors were related to the active shooter’s mental health, problematic interpersonal interactions, and leakage of violent intent. Being aware of these changes in behaviors can be very effective in thwarting an attack. The session ended with an interactive and lively Q&A session. Attendees included many members of the local Shomrim, RNSP, a community organization that works together with the NYPD to ensure the community is a safer place to live. Elkanah Adelman, one of the RNSP coordinators, commented, “This training was super-informative. It touched upon natural reactions to dangerous situations and how one should act.” The community is very appreciative to the NYPD, the 101 Precinct, and all of the political leaders that hosted the event, including NYS Senator Joseph P. Addabbo (D-15th NYS Senate District), NYS Senator James Sanders Jr. (D-10th NYS Senate District), Assemblywoman Stacey Pheffer Amato (D-23rd NYS Assembly District), Congressman Gregory Meeks (D-5th Congress District), Councilman Donovan Richards (D-31st NYC Council), and his dedicated chief of staff, Manuel Silva.
It is everyone’s hope and prayer that this workshop was conducted only for educational purposes and that the information will never need to be utilized.

Councilman Donovan Richards speaking

The Magic of Science Research

On January 4, Mrs. Ruth Fried’s AP Biology and Mrs. Miriam Chopp’s AP Chemistry classes attended the Queens College Open House sponsored by the Garcia Center for Polymers at Engineered Interfaces. Professor Michael Hadjiargyrou, of the Life Science Department at NYIT, presented an intriguing lecture, “Will Tissue Engineering Supply Organs?” Senior Eliana Ellerton said, “The day began with a fascinating lecture on the work of 3D printing in the creation of usable organs. I had previously heard of technology like this existing, but to hear from a man who worked in this field on a day-to-day basis was incredible.
Innovations like this will change the medical field as we know it, and it was really cool to listen to.” Our YUHSG Science Institute was well-represented by Michal Auerbach (‘14), who spoke eloquently about her summers of research experience at Garcia. The physics magic show highlighted atmospheric pressure, sound waves and the particulate nature of light, and the girls enjoyed Non-Newtonian fluids and “playful polymers” (aka Silly Putty). Students returned to school excited and inspired by the potential inherent in scientific research.

Tu B’Shvat Shuk at YCQ

By Sarah Owadeyah

PHOTO CREDIT: ARIANA KALANTAROV & OLIVIA KANDINOV

On Wednesday, January 16, the sixth grade girls at Yeshivah of Central Queens created a shuk for Tu B’Shvat. The girls worked hard learning and memorizing their songs about the different foods that were sold at the shuk. They also decorated boards and set up booths according to what food they had. The girls dressed up like waitresses with a white top and dark bottom and wore aprons and hats. All of the elementary students came to visit the shuk. The girls sang their song for all the visitors. Every student was given dollars and a passport in his or her classroom. When they arrived at the shuk they exchanged their American dollars for shekalim so that they could buy different foods and drinks at the booths that the girls set up. Aylin Soofirzadeh was part of the vegetable group. “I really enjoyed singing for the little kids,” she said. Eliyahu Babayev, grade 1, said, “I thought it was so cool that I felt like I went to Israel. And I liked learning about the different kinds of foods in Israel. I liked the fruits the best.” The girls’ morah explained to them about all the different parts of a shuk. The experience gave the sixth graders and the visiting students a great way to learn about what a shuk in Israel is like, while celebrating Tu B’Shvat.

MTA Sophomore Lobbies in Albany

By Sam Verstandig (‘21)
MTA … member Carmen De La Rosa, Senator Robert Jackson, and Lieutenant Governor Kathy Hochul. He also attended a Senate session, which began with his father, Rabbi Stuart Verstandig (YC ‘80, FGS ‘83, RIETS ‘84) …

With Senator Anna Kaplan

Central’s Ulpana Program Returns

Sixteen sophomores returned last week after spending a month in Central’s Ulpanat-Tzvia Exchange Program in Israel. Their cohort was the biggest yet, and they returned with new understandings of Israeli culture and history, wonderful new friends, and an infectious enthusiasm about their experiences. As exchange students at Ulpanat Tzvia in Maale Adumim, the students had the unusual opportunity to live in a dormitory with their Israeli peers and join their Judaic studies classes, as well as to travel to the Old City, Har Herzl, and other important sites around the Jerusalem area. One participant, Leora Muskat, reflected, “Some of the best parts for me were the meals and free time with the Israelis. I got to be really immersed into their culture.
I saw my Hebrew grow as my conversations with the Israeli students grew from just asking how many siblings they have to more meaningful conversations.” She added, “Just last night, an Israeli girl I had just met pulled me aside and said, ‘Leora, at Yisraelit (you are Israeli)!’ I was ecstatic – it was like a dream come true. I’ve always wanted to be Israeli and I guess after a month of living here, there is something Israeli ingrained in me.” Eliana Wachstock added, “Every night I would call my parents and tell them how grateful I am to be given this opportunity. It truly is once in a lifetime. Living in America, I would never experience living an Israeli life or have the opportunity to create bonds with all these new girls.” Both students also reflected, at length, on the deep meaning of learning and practicing Judaism in Israel, the warmth of their Israeli counterparts, and the fascination of being exposed to a completely different high school culture. Luckily, the experience is not over; some of the Israeli students from Ulpanat Tzvia will be joining Central this coming Sunday. Welcome back, sophomores! You were missed.

Rambam Commemorates Warsaw Ghetto

Approximately 120 community members joined students in Rambam Mesivta in learning more about the detailed history of the Warsaw Ghetto. Together, they viewed the newly released film by Nancy Spielberg, “Who Will Write Our History,” which was brought to Rambam with the generosity of Mr. Larry Gordon and the coordination of both Dr. Alex Sternberg and Michele Justic. Rabbi Zev Meir Friedman introduced the movie by stating the importance of learning about our past. He quoted the words of Rav Yosef Dov Soloveichik in stating that a true Jew must be aware of the destiny of our people. “Knowing where you come from and the past travails of our people’s history forges who you are today,” said Rabbi Friedman.
The documentary film focused on daily life in the ghetto and included actual archival footage taken during that time. Tragically, the pictures and video footage were taken by the Nazis with the express purpose of creating a narrative which would justify the murder of our people. Images of well-dressed women were juxtaposed next to starving children to falsely portray the Jews as callous. In all cases, these images were posed and forced upon us by Nazi henchmen. Warsaw, whose 350,000 Jews comprised one-third of the city’s population, became the center of the Nazis’ plan to exterminate the Jews of Poland. Through the forced resettlement program, the city was divided into sections, and 500,000 Jews were forced into closed, oftentimes uninhabitable living quarters. Deportations from the ghetto sent Jews to Treblinka and other killing camps. A handful of courageous Jews led by Dr. Emanuel Ringelblum recognized that the Nazis’ plan was the total eradication of the Jewish people. Ringelblum also understood the Nazis’ nefarious plan to justify their crimes and create an anti-Semitic narrative. Dr. Ringelblum and his group courageously recorded and documented daily life in the ghetto, recording the hunger, the beatings, the shootings and the eventual deportations of the half million Jews brutally confined to that small area. They called their group Oyneg Shabbos and took great care to hide all their writings and documents in metal canisters beneath buildings. After approximately 440,000 Jews were deported and exterminated, the remaining 60,000 Jews decided to fight, against all odds, against their Nazi oppressors. Using homemade Molotov cocktails and sheer will, the Jews courageously held out against the Nazi onslaught for 30 days. The Nazi murderers decided they would burn down the ghetto with flamethrowers and forcibly capture Jews hiding in the sewers. Dr. Ringelblum himself was betrayed by the Polish police and murdered by the Nazis. Most Jews who worked with Dr.
Ringelblum were murdered and only two survived. Post-World War II, based upon the information of Hersh Wasser, who actually hid the canisters, the documents were unearthed from beneath the rubble of Warsaw. The documentary paid tribute to the Jews by recounting the tragedy in their own words. Community members who came to see the movie included Rabbi Herschel Billet and Rabbi Yitzchok Goodman. Mrs. Irene Hizme, herself a survivor of Auschwitz and a victim of Mengele’s horrible experimentations on twins, was also present. Concomitant with this assembly, a group of approximately 35 students met privately with Rabbi Yotav Eliach, who gave them an important synopsis of the rise of Nazism, the Wannsee Conference, and an overview of the Holocaust. He spoke about Jewish physical and spiritual resistance and provided a platform for students who felt that the documentary being shown might be too graphic and jarring. The mood was somber as students and community members exited the building. Those present understood the responsibility and burden of Zachor.

MTA Welcomes Yeshivat Makor Chaim Talmidim

“Kaytzad mirakdim…” R’ Yehudah Deutsch had the boys of Learn & Live dancing with laughter with his presentation of merakeid last Sunday. This coming Sunday iy”H “Need 2 Knead” will be presented. For more information regarding L&L/Pirchei of Far Rockaway, please email learnandlivefr@gmail.com or try the L&L hotline 641-715-3800, pin 932191#.

Senator Felder Saves Taxpayers’ Money

Senator Simcha Felder led a successful effort to increase the NYS Child and Dependent Care Tax Credit.
Passed as part of the 2017-18 New York State Budget, the increased tax credit is now available for the 2018 tax year to help New York taxpayers with their childcare expenses. 200,000 New York families are expected to qualify for the increased tax relief, saving them a combined $47 million annually.

“Many New York families have more than two children, so why should the tax credit stop at two?” said Senator Felder. “By expanding the child care tax credit, we are making it easier for middle-class parents to work and raise a family in New York.”

Senator Felder’s legislation expanded the tax credit in two important ways. Under the new parameters, the tax credit can now cover up to 5 children, instead of just 2, and increases the expenses you can claim. The average credit amount nearly doubled for middle-class families.

MTA is excited to welcome a group of talmidim from Yeshivat Makor Chaim (YMC) in Israel. MTA’s unique Makor Chaim Israel Exchange Program enables a group of MTA sophomores to spend a month learning at YMC and a group of YMC juniors to spend a month learning at MTA. The YMC talmidim infuse MTA with the ruach and spirituality that their yeshiva is known for. Throughout the month, they can be found greeting MTA talmidim with live music, singing, and dancing.
They also host a Likrat Shabbat celebration every Thursday night, where they help the entire yeshiva get into the Shabbos spirit, in addition to bringing interactive Torah learning to every shiur.

EXAMPLE: NYS Child & Dependent Care Credit for a Family with 5 Children ($9,000 in expenses)

NY AGI | Credit Amount Prior to Changes | After 2017 Changes | Difference
$15,000 | $2,310 | $3,465 | $1,155
$50,000 | $1,194 | $2,092 | $898
$75,000 | $240 | $1,080 | $840
$150,000 | $240 | $1,080 | $840

Taxpayers are eligible to claim the Child and Dependent Care Tax Credit to help offset the costs of caring for a child under the age of 13, a disabled spouse or a disabled dependent.

“As every parent knows, raising children can get very expensive, very quickly. The expansion of this tax credit allows parents to keep more of their hard-earned money, period,” concluded Senator Felder.

To learn more about eligibility and claims, you can visit the New York State Tax Department’s website.

YOSS ECC Learns all about Brachos

Starting with a grape juice taste test, followed by comparing different breads and grains, the Pre-1A boys at the Early Childhood Center at Yeshiva of South Shore are enjoying a month-long Brachos Experience.
Over the next few weeks, the boys will cook, bake, taste and learn all about each of the brachos, with an exciting Brachos Hunt at Gourmet Glatt to end the whole unit.
Pruz at Gesher: Bringing in the Adars B’Simcha

The children of the Gesher Early Childhood Center were treated to a very special treat in honor of Rosh Chodesh Adar I. World famous singer Michoel Pruzansky led the students and faculty at this month’s super lively Rosh Chodesh assembly. The simcha was palpable, with each class excitedly dancing in circles with their morahs. The children were given glow glasses and glow sticks, and the neon atmosphere was sparkling. Halfway through the concert, Michoel Pruzansky asked the children to sit and then led them in a heartfelt rendition of “V’zakeini,” the mother’s tefillah to see her children shine, uttered each week when lighting Shabbos candles. The waving glow sticks overhead in the darkened room created a very moving scene. The concert continued with more dancing and was followed by the monthly birthday celebrations that are typically shared at the assembly. Gesher would like to thank the Shapiro family for organizing and sponsoring this most memorable Rosh Chodesh event.
Thanks are also extended to Mr. Ari Bauman, who accompanied Michoel Pruzansky on the keyboard. Looking forward to a zman full of simcha.

HAFTR Middle School celebrated Davener of the Month awardees with a special breakfast. Each student was also presented with a certificate. We hope that they continue to pray with the same heartfelt kavana throughout their lives.

Protecting Long Island: Senate Democrats Fight Against Offshore Drilling

Senate Majority Leader Andrea Stewart-Cousins joined the Chair of the Senate Environmental Conservation Committee, Senator Todd Kaminsky, and members of the Senate and Assembly Majority Conferences to announce support for historic legislation to ban oil and natural gas drilling in New York’s coastal areas. The legislation (S.2316), sponsored by Senator Kaminsky, will protect Long Island and New York from the Trump Administration’s offshore drilling expansion efforts. The Senate Majority will pass this critical legislation on Tuesday, February 5. “The Senate Majority will not stand by as the Trump Administration plans to drill off Long Island shores,” Senate Majority Leader Andrea Stewart-Cousins said. “Long Island’s natural resources and communities’ quality of life are under threat. I applaud.” Senator Todd Kaminsky, bill sponsor and Chair of the Senate Environmental Conservation Committee, said, “It is essential for tomorrow that we protect our planet today—especially the vital natural resources here on Long Island and across our state. We are taking two big steps toward a more sustainable future for New York. I am proud to work with Leader Stewart-Cousins and our Senate conference on prioritizing and protecting Long Island’s environment. Offshore drilling is dangerous to the local environment, and would reverse progress toward our climate change goals—it must be prohibited.
And by curbing the over-fishing of menhaden, we are conserving an extremely important species in our Atlantic Ocean ecosystem.” The legislation advanced by Senator Todd Kaminsky and supported by the Senate Democratic Majority will update New York State’s decades-old laws regulating oil and natural gas drilling. Specifically, this legislation prevents conveyances, leases, and acquisitions of land for offshore oil and gas. Senate Deputy Majority Leader Michael Gianaris said, “Protecting New York’s coastal communities is critical to our future. We must value our natural resources and put people before energy company profits. Banning off-shore drilling will do just that.” Senator Joseph P. Addabbo Jr. said, “I am proud to stand with my colleagues in advancing this vital legislation to prohibit oil and natural gas drilling off of New York’s coast. Permitting this kind of misguided and potentially disastrous drilling would endanger our environment, and threaten fish and wildlife, while negatively impacting tourism and the economy in coastal communities. At worst, it could result in potentially devastating spills and widespread irreversible contamination of our waters, beaches, homes and businesses located on or near the waterfront. It’s time to just say ‘no’ to this ill-advised activity in New York State.”

HANC Spelling Bee

On Monday, HANC’s Samuel and Elizabeth Bass Golding Elementary School held its annual schoolwide spelling bee. Under the leadership of Mrs. MaryAnn Harold, the fourth, fifth and sixth grade students participated in a class contest from which two winners were chosen in each class.
These children demonstrated their spelling skills at Monday’s “School Championship.” As the champion spellers showcased their exemplary abilities, they were cheered by their classmates. Even after a word was misspelled, the disappointed contestants were met with applause and hugs of encouragement. The participants were: fourth graders Jamie Blass, Sam Edery, Yosef Chaim Hilsenrath, Aiden Jerome, Ilana Nenner, Jenny Schwartz and Ruby Tilis; fifth graders Kayla Brukner, Stephanie Macagno, Jonathan Paimony and Tara Sebbag; and sixth graders Shoshana Eisner, Matan Galanti, Azi Goldstein, Emily Mark, Mark Rosenstock and Jaden Stavish.

Spelling Bee winner Kayla Brukner with Mrs. MaryAnn Harold

After an admirable job on the part of all the contestants, it came down to a hard-fought duel between Jaden Stavish and Kayla Brukner. However, in the end, only one could be declared the winner, and it was Kayla Brukner who will move on to the next phase and potentially participate in a Regional Spelling Bee. We wish Kayla much success in the upcoming competition. It was heartwarming to see how all the participants held their own and were so supportive of one another. Congratulations to all of the contestants for a job well done!

Genetics Research Fair at Shulamith

Last week, 11th and 12th grade genetics students at Shulamith High School for Girls participated in a Genetics Research Fair, presenting their term projects for faculty members and peers. Throughout the term, under the guidance of their teacher Ms. Tzivia Brandwein, students experienced the process of building an academic research paper, interviewing experts and consulting with primary resources. At the fair, students showcased their findings, sharing developments and horizons in areas such as neurogenetics, molecular genetics, immunogenetics, cancer genetics and medical genomics.
An azkara was held in Eretz Yisroel for Rabbi Chanina Herzberg, zt”l, last Motzaei Shabbos. Ahron Herzberg, R’ Avi Lieberman, R’ Chaim Avrohom Weber and Rabbi Moshe Shonek spoke about Rabbi Herzberg.

Siach Yitzchok’s Shalsheles Melava Malka

The Shalsheles Melava Malka at Siach Yitzchok – an opportunity to focus on the transmission of our mesorah from one generation to the next – was a beautiful affair. Students, alumni, fathers, grandfathers, and rebbeim joined together for this memorable occasion that took place in TAG Elementary’s main dining hall. The crowd was first addressed by Rabbi Schon. He related that the Seforno tells us of a special segulah: when grandparents learn with their grandchildren, the grandchildren grow up to be successful. Reb Dovid shared a story of the roshem that a father can have on his children. Rabbi Feifer, rav of the Agudah of Bayswater and a sought-after speaker for Siach Yitzchok events, emphasized the incredible power of tefillah. He pointed out that if a person davens well, Hashem will answer – and will even give the person things that he didn’t ask for. A few stories illustrated this concept. Heartfelt, inspiring niggunim and spirited dancing were, as always, key features of this event. Aside from the wonderful memories, each student also received two gifts. One was a “standard” melava malka door prize: a double-sided frame with pictures of Reb Moshe Feinstein and Rav Steinman to inspire the boys to learn lessons from these gedolim and incorporate them into their lives. The second was photo magnets; a photographer took pictures of each family and the magnets were produced on the spot so that the boys were able to bring them home that very night. May we be zocheh to many more Siach Yitzchok events of hisorirus. A special thank you to parents Mr. Aryeh Pinchasov, for supplying the food, and Mrs.
Soberman, for coordinating the many details of the event and coming early to set up after Shabbos.

PHOTO CREDIT: IVAN H NORMAN

Baruch Hashem L’Olam: An Innovative Avos U’Banim Program

Rabbi Leib Geliebter Memorial Foundation, renowned for innovative Holocaust education programming for children and adults, has launched a new intergenerational study program, “Baruch Hashem L’Olam,” for the works of the legendary Gaon of Plotzk, Moreinu HaRav Aryeh Leib Tzintz, zt”l (1768-1833). The Maharal Tzintz was a Gaon and Tzaddik about whom Rav Akiva Eiger, zt”l, wrote: “One of the greatest of our times.” The Maharal Tzintz promised to be a meilitz yosher in Shomayim on behalf of anyone who reprints and studies the 100 seforim he authored! “We have fulfilled the first phase of our mission by recording the stories of the survivors and perpetuating their testimony to future generations,” says Dr. Joseph Geliebter, the director of the Foundation. “Now we are focusing on the future, ensuring that our children and grandchildren connect to our Gedolim from the past. The foundation is a pledge to the generation that was lost; we will not forget them, and we will not forget their dreams.” The approach of the foundation is especially poignant in that it is disseminating the Maharal Tzintz’s Torah to Jewish children, giving the Maharal Tzintz the “children” he never had. This project is for the aliyas neshamos of the one and a half million Jewish children who were cut off in the prime of their lives and killed by the Nazis, yimach shimam, and will be a way of reclaiming the lost potential of these children and the Torah that they could have learned. At Sinai, when Hashem asked who is going to guarantee that you will keep the Torah, we answered, baneinu areivim lanu – our children will be our guarantors. Thus Avos U’Banim is enshrining this as our guiding principle.
Baruch Hashem L’Olam is named in honor of Bella Liba bas Rav Avrahom Mannis, a”h, mother of Dr. Joseph Geliebter, the director of the Foundation. Mrs. Geliebter, a”h, was a Holocaust survivor from the city of Pietrokov, Poland, and was saved by the Radoshitzer Rebbe, Rav Yitzchok Finkler, zt”l, by whom she was a bas bayis after the passing of her father when she was only 12. Her whole life was dedicated to the principles of tzedakah, and she always felt that Hashem guided her in every point in her life; hence, Baruch Hashem l’Olam was her motto. In 1933, her father passed when she was just 12 years of age. She chose to leave the Bais Yaakov, sacrificing her own education, to help support her family, so that her older brother, the only son in the family, could continue studying in yeshiva. In the zchus of the Heiliger Maharal Tzintz, Rav Aryeh Leib Tzintz ben Moshe, yogen aleynu, we should have much hatzlocha. The project is already spreading from the Bostoner beis medrash across the USA and all the way to Eretz Yisroel. For more information, call 718-338-0679, email info@yizkereim.org or visit.

HaRav Yochanan Bechhofer and Dr. Joseph Geliebter with the Bostoner Rebbe, HaRav Yaakov Horowitz of Lawrence

Yeshiva Har Torah Fourth Graders Tackle Bullying

As part of a yearlong focus on empathy, Yeshiva Har Torah’s fourth graders created and performed an original show about how students can deal with potential bullying situations. The show was the culmination of StandUp/SpeakUp, a drama-based bullying prevention program by Envision Theater. The production started as a series of workshops in which fourth graders discussed how to approach difficult bullying situations. The students brainstormed various solutions, problem-solving in teams and using dramatic role play. The students then wrote a play based on those ideas, using directed drama activities and creative writing to dramatize and teach other students how to be an upstander when they see someone being bullied.
According to Rebecca Lopkin, Envision Theater’s founder and artistic director, Envision is “thrilled to be partnering with Yeshiva Har Torah. We developed StandUp/SpeakUp several years ago to address the bullying issue in schools. Under the guidance of Envision Theater’s Teaching Artist, Jen Winkler, students use this safe space to take risks, ask questions, and practice being an upstander. These students can now use these real-life skills on the playground, bus or anyplace they may see bullying.” The workshop is part of YHT’s comprehensive, homegrown middot curriculum, created by Principal Ms. Pesha Kletenik. This year’s theme is #YHTyouandme, with a focus on empathy and friendship through monthly projects and Jewish and General Studies lessons. Weekly “Table Talks” are sent home to help parents guide discussions connected to the middot lessons at school at their Shabbat tables. Assistant principal Sara Duani noted that the program is a natural fit with YHT’s curriculum. “Teaching empathy is a core value at Yeshiva Har Torah and using drama through our partnership with Envision Theater was a way to bring a different approach to that.”

Protein Bracelets at Shulamith

Shulamith High School 11th and 12th graders in Mrs. Gross’s nutrition class explored amino acids and the proteins they make by creating protein bracelets. Chevi Charlap, current Shulamith senior, said, “We really enjoyed discovering how 20 monomers can yield hundreds of thousands of different types of polymers. It was so cool to see proteins come to life.” After the project was completed, the students exchanged and gifted each other protein bracelets based on their functions, such as health, support, strength and wellness.

Debate Stars at MSH

The MSH Debate Stars’ first debate of the year, hosted by North Shore Hebrew Academy, discussed the topic of “Citizens United v.
FEC,” a Supreme Court case involving indirect corporate contributions to political campaigns, and whether it should be overturned. The MSH Debate Stars spent weeks researching the case and working on presentations for their respective sides. Senior Penina Spearman and junior Esther Conway sided with the affirmative, arguing that the case must be overturned in order to protect the people from corporate interests. Debate captain senior Sarah Spielman and sophomore Elisheva Conway, the MSH negative team, argued that the case must be upheld because it protects a corporation’s fundamental rights. After three intense rounds of debate against various yeshiva high schools, the MSH negative team was victorious, winning the award for second-best team at the competition. Sarah Spielman won Best Overall Speaker, and Elisheva Conway won Second Best Overall Speaker. The topic of the second debate of the year was something on all students’ minds: should a school week be four days instead of five? The negative team, sophomore Elisheva Conway and junior Staci Steinfeld, and affirmative team, seniors Penina Spearman and Eliana Hirsch, were completely undefeated! Both teams won both of their rounds. The negative team bested SAR and KTA, and the affirmative team was successful against Flatbush and KTA. We are so proud of the hard work and preparation all the girls put in to achieve these impressive results, and we thank faculty advisor Mr. Ira Schildkraut for all his work in helping them!

Train to Teach and Inspire: A Unique Opportunity

Kollel Emek Hamelech is a unique institution for training serious and motivated young married Talmidei Chachamim for careers in Rabbinic leadership, Jewish Education, Kiruv, and other means of enhancing Limud HaTorah and Kiyum Hamitzvos.
The general Derech Halimud, as well as the teaching and speaking skills taught, are patterned after the powerful and inspiring methods of Rav Weinberger Shlita, Mara D’Asra of Aish Kodesh in Woodmere, New York. For nearly three decades the Rav’s approach has been an amalgamation of traditional Torah sources with classic Sifrei Machshava and Chassidus. Our Hanhalah: Rav Weinberger, Rav Levin, Rav Ginsberg, Rav Rubenstein, Rav Zakutinsky. For those seriously interested in joining for the upcoming Elul Zman 2019, please submit an application: Emekhamelech.org. 894 Woodmere Pl | Woodmere, NY 11598 | emekhamelechinfo@gmail.com

Yeshiva Gedolah of the Five Towns 16th Anniversary Dinner

A dinner is an appropriate time to reflect on the accomplishments of the past year. Equally important, it provides an opportunity to look ahead and assess what is still to come. The Yeshiva Gedolah of the Five Towns is exemplary in what it has been able to accomplish thus far. However, it is the still-bright future that provides such excitement and reveals how much is still left to achieve. The Yeshiva remains committed to its founding principles of keeping a warm and close relationship with each and every talmid, yungerman, and community member who chooses to use its services. The mesorah of the rebbeim is felt in every act, connection and relationship with the talmidim and families. Since its inception, the Yeshiva has become a sought-after makom Torah for young men returning from learning in Eretz Yisroel. The Yeshiva is known for the high-level shiurim given by the prominent rebbeim as well as for its energetic atmosphere and unique hashkafah. The Yeshiva continues to accommodate the increasing demand from students, and specifically those who dorm, due to the reputation it has earned locally and across the globe. The Yeshiva, however, is a multifaceted institution.
In addition to having become a prominent makom Torah for its talmidim and Kollel yungerleit, it also serves the Five Towns community in various ways. The eruv is maintained by the Yeshiva, which includes sending people to check its status and fix it on a weekly basis, irrespective of the weather conditions. The Yeshiva provides numerous shiurim to baalei batim, ranging in topic and skill level, to accommodate the varied needs of its constituents. Its doors are open to all who wish to learn in its inspiring atmosphere. The Yeshiva’s yungerleit are available as chavrusas for interested baalei batim. Young boys and girls have a growing number of opportunities to come and share in the growth sought by all those who enter the Yeshiva’s doors. In the coming days, at the Yeshiva’s 16th Annual Dinner, the greater community will once again come together to celebrate YGFT’s growing accomplishments and put faces on some of those who have graciously shared in the responsibilities towards facilitating its success. The three sets of honorees, Dr. and Mrs. Joshua Fox, Mr. and Mrs. Zev Hertz, and Rabbi and Mrs. Nesanel Snow, have contributed and continue to contribute towards the Yeshiva’s efforts in expanding the reach of Torah within our community in varied ways, through ahavas haTorah, ahavas chessed and mesiras nefesh.

Guests of Honor – Dr. Joshua and Mrs. Shiffy Fox

The strong kesher of Dr. Josh and Shiffy Fox, our Guests of Honor, with YGFT began when their son Yitzy joined the yeshiva. The relationship strengthened when their son Bentzy followed a few years later and eventually evolved into so much more. We proudly chose this remarkable couple as our guests of honor since their home is permeated with mesiras nefesh for Torah and dedication to the klal. Dr. Fox is the director of Advanced Dermatology and the Center for Laser and Cosmetic Surgery. He has developed a skin research foundation, and his innovative techniques in dermatology have been published widely. Dr.
Fox has served on the board of many mosdos, both locally and in Eretz Yisrael. He was president of Congregation Shaaray Tefila in Lawrence and is one of the four founders of Yeshiva Ketana of Long Island. Many do not know that he received semicha from Bais Medrash L’Torah in Chicago prior to becoming a doctor. Despite a busy schedule, Dr. Fox makes time to learn several sedarim throughout the week. Shiffy Fox has served as president of the women’s league of several institutions and chaired events for many Jewish causes. Mrs. Fox maintains a choson/kallah apartment in the Fox home, allowing young couples to get on their feet in the early stages of marriage. She uses her leadership skills to help manage her husband’s medical practice and supports her husband’s endeavors on behalf of klal Yisrael. Shiffy, an occupational therapist by profession, currently dedicates her time and energy to raising their beautiful children and grandchildren. The Foxes truly exemplify a couple steeped in chessed and dedicated to a life encapsulated in Torah. They are the proud parents of four married children, as well as their children who are living at home. Their children are marbitzai Torah in Eretz Yisrael, Dallas, Monsey, and Far Rockaway. Dr. and Mrs. Fox are ardent supporters of the yeshiva and partner with the Yeshiva in the chinuch of their children. Yitzy continues to learn in Yeshiva Gedola while studying for smicha and pursuing his CPA. Bentzy learns in Yeshiva as well while pursuing a degree in psychology. We are proud to have such exemplary bochurim in our Bais Medrash. We are proud to have Dr. Josh and Shiffy Fox both as parents and loyal friends of the yeshiva, and we are grateful to have this opportunity to recognize this special family at this year’s dinner. Having accomplished so much, the Foxes maintain a very palpable humility, and we thank the Foxes for accepting the call this year as Guests of Honor.
Torah Leadership Award – Zev and Leba Hertz

As years pass, it is often hard to maintain strong connections with all those who have entered an institution’s doors. However, when the identification is so strong to begin with, it makes the ongoing connection that much easier. Zev, who grew up in Miami and later moved to LA, accomplished a tremendous amount in his 3+ years in Yeshiva. After coming to the Yeshiva from such noteworthy establishments as Ohr Sameach, Mir, Lakewood and Ner Yisroel, Zev found a home for himself in the Yeshiva. His hasmada and seriousness were assets for him and the Yeshiva during his tenure here. Zev continues to maintain his connection to the Yeshiva and its rebbeim but, more importantly, maintains that focus on learning and Yiddishkeit despite his busy work day in the family’s real estate business. Leba, who grew up in LA but also spent time at Yavne in Cleveland, is a staunch supporter of Zev’s learning. Her selflessness and passion allow the family to raise their children to the highest standards. Leba, when she is not busy raising their family, teaches in Toras Emes, where all of their children attend as well. The two of them are leaders, in their very quiet way, in representing Torah and the impact it has in every aspect of their lives. To watch them is to be inspired by them. It is with great pride that we present them with the Torah Leadership Award this year.

Harbotzas Torah Award – Rabbi Nesanel and Adina Snow

It is always rewarding when alumni move on and are so productive and integral to their new communities. It is, as much as anything else, a testament to the Yeshiva, its rebbeim, and overall environment. The Snow family, who settled in Woodmere after their time in the Yeshiva’s Kollel, is continually offering the Yeshiva opportunities to take pride in their accomplishments. Their collective dedication to the Yeshiva and the growth of Torah is an inspiration to us all.
Nesanel learned in Yeshiva and excelled first as a bochur and later as an avreich in our kollel. As a young boy, he grew up in Woodmere, attending the South Shore Yeshiva, then Rambam, followed by Ohr Yerushalyim, subsequently entering the Yeshiva Gedolah some 15 years ago. More recently, Nesanel moved on to work in the administration of the Vaad Hakashrus of the Five Towns, continuing his avodas hakodesh in a significant way. On top of that, he finds time and energy to run a morning seder in the Yeshiva before the early minyan, as well as a night seder program after a long day at work. It’s this type of dedication to learning and teaching Torah that has made him a standout in Yeshiva over the years. Adina (aka Feit) Snow grew up in Staten Island. After completing her schooling, she spends her day as a speech therapist in Crown Heights. If that were not enough to keep busy, she cares for their children, who attend Siach Yitzchok and Bnos Bais Yaakov respectively, while also having a baby at home. Both Nesanel and his wife come from parents and homes that embody ahavas chessed and are machshiv Torah. It is certainly a zchus to have them in our Yeshiva and even more of a zchus to be able to recognize them this year. Please join us at 7:00 PM on Wednesday, February 13, at the White Shul in Far Rockaway, NY, as we recognize these special families for their tireless service and dedication to the yeshiva, their communities and klal Yisrael. For more information please call the Yeshiva office at (516) 295-8900x4 or go online to.

TJH Centerfold

Football Withdrawal Syndrome: Useful tips to get over Football Withdrawal Syndrome

Introduce yourself to the lady and children that you find inside your house...they are your wife and kids. Think about how long the off-season is.
Now think about how long the off-season will be for Rams coach Sean McVay and count your blessings. When your kids talk to you, act like you are a football player talking to the media. Gather all the pairs of socks from behind the couch. (Make sure to wear your hazmat suit, please.) Invite your friend over and tell him that you want to hear all the stories he has been telling you on Sunday nights (i.e. about the time he was stuck in traffic for two hours; how his cleaners lost his favorite shirt, etc.) because they sounded so interesting during the final drive of football games all season that you just want to hear them again. Change the lightbulb that your wife has been “constantly” reminding you about every six months. Go to the gym and work off the 20 pounds of chili you ate during the season…so that when next season comes around you can put 20 pounds back on. If you are a Jets fan, meditate to the numbers 4 and 12…because that was your team’s record last year and will likely be their record next year. Read a book. (Start with Dr. Seuss and work your way up to Curious George. By that time, the next football season will certainly be upon us.) Get into basketball and watch the Knicks. They will win at least 12 games this season…which wouldn’t be bad if they were a football team. Consider getting into alternative sports such as competitive eating. (You are probably a natural fit.)

You gotta be kidding

A mother and daughter are washing dishes while the father and son are watching football. Suddenly, there is a crash of breaking dishes and then complete silence. The son looks at his dad and says, “It was Mom.” “How do you know?” wonders the dad. The son responds, “Because Mom didn’t say anything.”

Facebook Trivia

See what you know about the $170 billion social media company that just turned 15 years old.

1. When Facebook started, the site displayed a header image featuring a man’s face obscured behind binary code. The identity of the man could not be seen clearly, but it later came to light whose face it was. Whose was it? a. Elvis Presley b. Al Pacino c. Alexander Graham Bell d. Albert Einstein

2. What is Facebook founder and CEO Mark Zuckerberg’s annual salary? a. $1 b. $8 million c. $14 million d. $140 million

3. Before co-founding WhatsApp, Brian Acton interviewed and was turned down by Facebook. He tweeted, “Facebook turned me down. It was a great opportunity to connect with some fantastic people. Looking forward to life’s next adventure.” Well, his next adventure, WhatsApp, was eventually purchased by Facebook. How much money did Facebook pay for WhatsApp? a. $750 million b. $1.7 billion c. $4.3 billion d. $19 billion

4. According to a study, the average Facebook user has 150 friends. According to that same study, in a time of need, how many of those “friends” would the average user turn to? a. 0 b. 4 c. 19 d. 150

5. Why is Facebook’s dominant design color blue? a. Because founder Mark Zuckerberg is color blind and has a hard time distinguishing between other colors b. Because Mark Zuckerberg had a blue car when he founded Facebook and since it was the first photo he posted, he wanted the homepage to match the photo c. Facebook polled its first 1,000 users regarding which color to use and blue got 57% of the vote d. Because blue is the easiest color to see on a computer screen

6. A recent study in the Journal of Applied Biobehavioural Research found that the more time one spends on Facebook the more they become: a. Happy b. Popular c. Fulfilled d. Depressed

Answers: 1. B 2. A 3. D 4. B 5. A 6. D

Wisdom Key: 5-6 correct: You spend a lot of time on Facebook...thumbs down for you. 3-4 correct: You only spend a moderate amount of time on Facebook...you must be only moderately desperate. 0-2 correct: You are probably not even on Facebook. What do you do all day? Read? Play ball? Play an instrument? Paint? Look at real sunsets?
And do you mean to tell me that you actually talk to people?

Torah Thought: Parshas Terumah

By Rabbi Berel Wein

The Nozyk Synagogue in Warsaw

Even though the L-rd requires no building or special place in the universe that He created, the Jewish people are commanded in this week’s reading of the Torah to donate special materials and talented labor to begin the construction of such a building, where the spirit of the L-rd, so to speak, will reign. There have been many ideas advanced over the ages as to why such a building was ever necessary for a G-d that prohibits idolatry and is purely a spiritual entity. But this is not the subject of my few words for this Sabbath. Wherever the Jews have found themselves, in every far-flung corner of this world, they have always constructed houses of worship and of learning upon which to base their communal life and societal survival. Most of these buildings – those that remain and have not been destroyed by time, changing demographics or wanton evil perpetrated by humans – are no longer serviceable as synagogues, for the Jewish communities that once populated them are gone. So these buildings have become at best museums and in many, if not most cases, buildings now used for purposes other than Jewish worship services. Nevertheless, these buildings, even if abandoned or not used for their original purpose, stand as mute testimony to the loyalty of the Jewish people and their perseverance in the face of terrible odds and hostile societies. Many of these buildings are now visited by Jewish tourists, and some of them are even official national landmarks protected by the governments of those countries. They all stand as testimony to the onetime presence of a vibrant Jewish community that was determined to continue to worship G-d in its own way and according to its millennia-old tradition. The buildings have become representatives of Jewish continuity and survival. One of the great tragedies of current Jewish life is that so many Jews have abandoned the synagogue and its worship service. Statistics in the United States, for instance, show that the highest proportion of any religious group in that country that does not attend worship services regularly are the Jews. What has resulted is the disintegration of the Jewish community in that country. Synagogues may be merely buildings constructed of bricks, cement, steel or wood. But buildings alone certainly do not guarantee any sort of Jewish future. Wherever these synagogue buildings existed, the Jewish community was able to bring forth generations and remain vital and productive. It is as though the Torah in this week’s reading senses this truth and commands that such buildings be built from Jewish funds, talent and effort. The blueprint for a synagogue building is a very ancient one, and it also details what a synagogue should look like and for what purpose it is to be built and attended. The synagogues and their buildings that exist throughout the world are the signposts of Jewish existence and the eternal witness to the spirituality of its people. Shabbat shalom.

From the Fire: Parshas Terumah – Making it a Habit of Hischadshus

By Rav Moshe Weinberger

Adapted for publication by Binyomin Wolf

The pasuk says, “You shall set up the Mishkan k’mishpato, according to its mishpat, as it was shown to you on the mountain” (Shemos 26:30). The Yerushalmi (Shabbos 12:3) asks on the lashon of k’mishpato, according to its law: is there a mishpat, a specific law, for some pieces of wood? People have laws; wood and objects do not.
The Yerushalmi answers that the beam that merited to be placed on the north side of the Mishkan would be marked and always had to be placed on the northern side every time the Mishkan was set up. So too, the beams that were placed on the south side of the Mishkan would always be placed in the south. The mishpat was that every beam had to always be placed on the side where it was set up at the initial building of the Mishkan. This teaches us k’vius, that there is a seder in the Mishkan. This Yerushalmi applies this l’halacha regarding a tallis. The Magen Avraham in Orach Chaim (68:6) brings in the name of the Shla HaKadosh that the source of the minhag to have an atara on a tallis comes from this din in the Yerushalmi. The purpose of the atara, which can either be some silver or an extra piece of material, is to make sure that there is k’vius by a tallis – that the tzitzis that belong in front, are always worn in front, and that the tzitzis that belong in the back will always remain in the back. We learn from the Mishkan the chashivus of k’vius. However, we see from the avodah of the kohanim in the Bais Hamikdash the very opposite of k’vius. We know from the Mishnah in Yomah (82) that there was a paiyyis, a goral, a lottery, every day to make sure that the avodah of the kohanim would not become kavua. We further know from the Mishna in Sukkah (5:6) that the same kohen who did the avodah on one day was not eligible to perform the avodah the next day. We see that this is the exact opposite of k’vius, and it was also in the Bais Hamikdash! This question is asked by the Chasam Sofer who wrote that the rule of k’mishpato is to establish each beam in its proper place because the idea of k’vius only applies to the physical structure, to the wood and beams of the Mishkan. We know that the physical Mishkan in this world corresponds to the Bais HaMikdash Shel Ma’alah, the Bais HaMikdash on high, where the avodah is performed by malachim, angels. 
We know that the angels are called omdim, meaning that they cannot change their role. They are fixed, kavua, in their roles. But regarding people in this world, things are completely different. There is a tendency in this world for people to grow accustomed to certain things and certain behaviors. Here the challenge for us is to avoid falling into a state of hergel – things should not become habit. As Hashem says through the Navi Yeshaya (29:13), “As the people have drawn close with its mouth and lips they honor me, yet they have distanced their heart from me and their fear of me has become like rote.” In order to prevent the pitfall of hergel, Hashem wanted a different kohen to be involved in each avodah by way of a daily drawing of the lots. This was done in order that the kohanim should merit to do the avodah with simchas ha’lev, joy, and hislahavus, excitement.

Aharon’s Greatness

This was the greatness of Aharon HaKohein. As the pasuk states (Bamidbar 8:3) regarding the lighting of the menorah, “And so Aharon did… just as Hashem commanded Moshe.” Rashi, quoting the Medrash, says on that pasuk that the words, “Vya’as kein Aharon, and so Aharon did,” tell us that the praise of Aharon was that he was unchanging. The Gedolei HaChasidus were bothered by this Medrash. They asked why Aharon’s obedience to Hashem’s word is Aharon’s praise. If Hashem would have whispered words into our ears, we would all follow Hashem’s instructions without the slightest deviation. Why does Rashi bring this Medrash which says that Aharon’s unchanging fulfillment of this mitzvah was his praise? Aharon’s unchanging fulfillment of this mitzvah was a very high madreiga. The koach of Amalek is k’rirus, coldness: “asher karcha ba’derech” (Devarim 25:18). The koach of Amalek is to try to make us feel comfortable and accustomed to doing our avodas Hashem.
Chazal say that nobody knew how to speak loshon hara like Haman HaRasha who said to Achasverosh (Esther 3:8), “There is one nation that is scattered and dispersed between the nations.” Haman specifically used the word yeshno, which is lashon shina, changing. Haman also used to word yeshno (related to shayna, sleeping) to imply that Hashem was sleeping and old and that the Jewish people do not excite Him anymore. The word shina is also related to the idea of repetition, as we see from the pasuk, “V’shinantem l’vanecha,” (D’varim 6:7). Now we can understand that the praise of Aharon was that despite the fact the he lit the menorah every day, he nevertheless was not impaired by the middah of Amalek, hergel; it never became a habit or stale. Each time he lit the menorah was with the same excitement as the very first time. The way of life is for a person to lose the excitement and to take things for granted. Imagine if every time we looked at our children, we would see them with the same excitement that we had when they were born. If only a husband would look at his wife with the same excitement he had when he saw her by the chuppa. We tend to grow accustomed to seeing our parents in the same way – they were there the moment we opened our eyes and continue to always be there for us. We should always try to think of our parents with renewed excitement and should not take them for granted. This was Mordechai Ha’Yehudi. He was called Ish Yehudi, a lashon of hoda’a, giving thanks. Mordechai took nothing for granted and was constantly thanking Hashem with renewed excitement for what he was given. R’ Yaakov Yitzchak from Lublin was known as the Yid HaKadosh. He earned his name because every day he was filled with renewal when he made the brach,a “She’lo asani goy.” That is what made him into the Yid HaKadosh – each day he was renewed in his service of Hashem. He felt he had just become a Jew. Once upon a time there was such a thing as shul Yidden. 
There were people who stayed in the same shul for 40, 50 and 60 years. They were in the same place, sat in the same seats and even had the same arguments year after year, but they were filled with a chiyus! There is a story of an old Rebbe in Europe in a poor little shtetl who never left his Bais Medrash to go anywhere else. One day, the chassidim decided that they wanted to do something special for the Rebbe and send him and his Rebbetzin on a vacation. They collected money together and they came to the Rebbe and said, "Rebbe, we were thinking that it would be a good idea for you to go on a vacation." The Rebbe listened and said, "You know, I was thinking the same thing!" He then picked up his Gemara and sat down one seat over from his usual place, looked around, and said, "Aahhh! This is a mechaya!"

That is what hischadshus means. It means being kavua while also finding renewal in that which is kavua. It means finding a new perspective on what you've always had, finding a new taste in old wine. The malachim in the Beis Hamikdash Shel Ma'alah above are omdim; they do not have to change. However, down here in this world, we have to constantly work on ourselves and always renew ourselves in order to prevent things from becoming old and stale.

Rav Moshe Weinberger, shlita, is the founding Morah d'Asrah of Congregation Aish Kodesh in Woodmere, NY, and serves as leader of the new mechina Emek HaMelech.

The Jewish Home | FEBRUARY 7, 2019

Parsha in 4: Parshas Terumah
By Eytan Kobre

Weekly Aggada

Speak to the Jewish people, that they should take for me an offering, from each man whose heart motivates him you shall take my offering (Shemos 25:2)

Have you ever encountered an item for sale where its seller is sold with it?
G-d said to the Jewish people, "I have sold to you my Torah; it is as if I have been sold with it." As it says, "That they should take for me an offering" (i.e., it is as if G-d told the Jewish people to take Him). This is comparable to a king who had a daughter, an only child. One day, the prince of a foreign land came and married her. Then he wanted to take her to his homeland, a faraway country. The king (the father-in-law) said to the prince, "My daughter is an only child. To be without her, that I cannot do. To prevent you from taking her, that too I cannot do. But do me this favor: wherever you take her, leave one room for me so that I can come live with you, because I cannot be without my precious daughter." So it was with G-d and the Torah. G-d said to the Jewish people: "I have given you the Torah. To be without it, that I cannot do. To tell you not to take it, that too I cannot do. But wherever you go, leave a space for me so that I may come dwell with it." That was the Mishkan. As it says, "And let them make for Me a sanctuary, that I may dwell among them" (Shemos Rabba 33:1).

Weekly Mussar

And you shall make two golden cherubs, of beaten work you shall make them, at the two ends of the ark cover (Shemos 25:18)

Containing the luchos and placed in the Holy of Holies, the Aron was the focal point of the Mishkan. It was adorned on top with the keruvim – two golden cherubs – described as "the image of a child's face" (Rashi, Shemos 25:18). An angelic, peaceful image, to be sure. But this is not the Torah's only reference to keruvim. When G-d banished Adam from Gan Eden, the Torah describes how "G-d drove the man out, and he dwelled to the east of the Gan Eden, the keruvim and the fiery ever-turning sword, to guard the way to the Tree of Life" (Bereishis 3:24). There, too, the Torah speaks of keruvim, but they are described as far less innocent "angels of destruction" (Rashi, Bereishis 3:24; Chizkuni, Bereishis 3:24). So what were the keruvim: angelic cherubs or angels of destruction? In fact, the keruvim were both "the image of a child's face" and "angels of destruction" – it depended on the environment. When the keruvim were given a "fiery ever-turning sword," they were "angels of destruction"; when they were placed atop the Aron next to the luchos in the Holy of Holies, they were "the image of a child's face." It all depended upon their environs. While all people are affected by their surroundings, children are even more acutely so. All children have the capacity to be either "angels of destruction" or "the image of a child's face" – it all depends on their surroundings.

Weekly Anecdote

Speak to the Jewish people, that they should take for me an offering, from each man whose heart motivates him you shall take my offering (Shemos 25:2)

The donations of the Jewish people to the Mishkan are described famously as "takings" (rather than "givings"), because one who donates to charity really ends up taking, whether in the form of Divine reward or spiritual protection or financial return (see, e.g., Alshich and Apiryon, Shemos 25:2). One man from a town near Kozhnitz found out firsthand just how rewarding charity can be. Our man made it a habit to collect charity for others, particularly those too embarrassed to ask for themselves. While he was ridiculed regularly by those he solicited, he carried on dutifully, knowing he was doing G-d's work.
One day, a desperate man came to him and poured out his soul. He could not afford to feed his young children or to pay for medication prescribed by doctors for his sick wife. He was at his wits' end. But the collector was conflicted. On the one hand, this man was so destitute and so desperate, and he truly wanted to assist; on the other hand, he had already used up all his good will with the local townsfolk, having made three rounds of collection just the prior day. In the end, he simply could not ignore the poor man's plight. He prepared himself for the barrage of insults he was sure to receive, and he went out to start collecting. His first stop was the local used clothing shop. "You? Again?" the shopkeeper asked. "You were just here yesterday! Stop bothering us!" Several of the shopkeeper's confederates were there too and decided to have some fun at the collector's expense. "We don't want to turn you aside empty-handed," they told him. "So we will give you all the money you need, on one condition. Here is a priest's garb. Don this and walk the streets of town without uttering a word. Do that, and the money is yours." Well, our good-natured collector did as the pranksters demanded. As he walked the streets of town dressed like a priest, the locals mocked him and jeered at him and threw rotten vegetables at him. But he trudged on until, at long last, the torture was over. The pranksters handed him the money, which he promptly turned over to the poor man. And then, just like that, he collapsed and died. The decision was made to bury him in the priest's garb. Sometime later, the collector's grave was opened inadvertently.
To the amazement of all present, his body was found fully intact and undisturbed. The townspeople feared that this man may have been one of the generation's hidden righteous ones, and they regretted treating him so horribly. They asked the Maggid of Kozhnitz to tell them if the collector was indeed one of the hidden righteous ones. "He was not," answered the Maggid. "But the priest's garb he used for that poor family years ago attained a certain sanctity because of the charity he collected, and it serves as a shield for him to this day."

Weekly Halacha

And you shall set upon the table showbread before Me always (Shemos 25:30)

At each Shabbos meal, there is an obligation to recite a blessing over two whole loaves of bread (Shulchan Aruch, Orach Chaim 274:1; Rama, Orach Chaim 291:4). This obligation – commonly referred to as "lechem mishneh" ("double portion") – recalls the double portion of manna that fell for the Jewish people in the desert each Friday, one for Friday and one for Shabbos (Rashi, Shemos 16:22). We therefore use two whole loaves at each Shabbos meal (Shabbos 117b; Rambam, Shabbos 30:9; Rama, Orach Chaim 291:4). Still, it is not necessary that one eat from both loaves; it is sufficient that only one be cut and eaten from (Mishna Berura 274:4). There is a custom to place twelve loaves on the table at each Shabbos meal, representing the twelve lechem hapanim ("showbreads") placed on the Shulchan in the Mishkan (Be'er Heitev, Orach Chaim 274:2). While some regard this as a biblical obligation (Aruch HaShulchan, Orach Chaim 274:1; Taz, Orach Chaim 678:2; Shaar HaTzion 271:11), others maintain that the obligation is rabbinic (Magen Avraham 254:23). Both men and women are obligated (Mishna Berura 274:1). During the Shabbos night meal, the bottom loaf is sliced first; during the Shabbos day meal (and all other meals on Shabbos), the top loaf is sliced first (Shulchan Aruch and Rama, 274:1).
Some have a custom to make a shallow slice in the loaf one intends to cut prior to reciting the blessing, to minimize the delay between the blessing and the eating of the bread (Mishna Berura 274:5).

The Wandering Jew
Return to Russia, Part I
By Hershel Lieber

Pesi and I returned to Russia in 1993, after an absence of eleven years. The last time I was there, in 1982, the vast country was known as the U.S.S.R. Not that I didn't attempt to travel there since, but the KGB was not interested in having me come and would not approve my visa application. In 1982, as a founding member of the Vaad L'Hatzolas Nidchei Yisroel, I was there twice. The first time was in January, together with Pesi and Rabbi Mordechai and Alice Neustadt, on a fact-finding mission to determine how we could help Soviet Jews connect to their Jewish heritage. The second time was in December with Zolly Tropper, to give shiurim and chizuk to the fledgling baal teshuva movement started by Ilya Essas.

The U.S.S.R. fell apart in December of 1991, and fifteen countries emerged in its place. The Vaad, which had been clandestinely sending shlichim there for the past ten years to teach Torah and Yiddishkeit, saw this as a great opportunity. Religion was not taboo anymore, and the opening of educational institutions was now permitted. The Vaad established several mosdos in Moscow and St. Petersburg (Russia), Tbilisi (Georgia), Baku and Kuba (Azerbaijan), and Kishinev (Moldova). To raise funds for these projects, they organized trips for potential donors to familiarize them with the existing circumstances and the Vaad schools. In 1993, Pesi and I were invited to participate on a trip headed by the Novominsker Rebbe and a group of philanthropists. Our previous trips and hands-on knowledge would enable us to rally support for the Vaad's efforts to rebuild Yiddishkeit in the former Soviet Union. We had already made plans to travel to a Jewish summer camp in Poland for two weeks in mid-August, where I would lecture and teach. Being that the group was leaving in late July, we readily accepted to join them for what turned out to be a most memorable and challenging journey.

[Photo: The yeshiva in Tbilisi, Georgia]
[Photo: The Vaad arriving in Baku, Azerbaijan]

After stopovers in Amsterdam and Stockholm, we finally arrived in Moscow. The group stayed in a more exclusive hotel, but we wound up in the shabby Hotel Minsk on Tverskaya Street. The next day, after Shacharis at the Choral Synagogue, the entire group walked around Red Square and visited the Kremlin. During our previous trips, we hid our Jewishness and hoped we would go unnoticed. In stark contrast, on this trip, our very noticeably Jewish group, many with beards, payos and yarmulkes, walked freely and fearlessly on the very cobblestones that the cruel Czars and ruthless dictators Lenin and Stalin had trodden.

[Photo: Jews from Kuba greeting members of the Vaad]
[Photo: Gala Melava Malka at the summer camp near Moscow]

Shabbos we spent at a boys' camp, a forty-minute drive from Moscow. The camp was run and financed by Rabbi Naftoli Zucker. Aesthetically, this place left a lot to be desired. It was probably our first and hopefully last time we slept on straw mattresses and pillows. Despite the austere surroundings, this Shabbos was such a spiritual high; it was an experience we still remember.
The davening, the festive seudos, the lively zemiros, the inspirational speeches, the singing and the dancing amid a sea of children of all ages was emotionally rousing! Many of these participants eventually became yeshiva students and today are proud and ehrliche Yidden. There were shiurim for the guests and opportunities to study and connect with the children. A melave malka concert by Reb Abish Brodt wrapped up a Shabbos which was me'ein olam habah.

On Sunday, we left by plane for Azerbaijan and landed in Baku. There our whole group boarded a large helicopter and flew to the small town of Kuba. Kuba was divided by a river, where Muslims lived on one side and Jews on the other. Stalin had murdered all the rabbis, except for one, who had kept the traditions of the community alive. The community was to a great extent ignorant about Torah and halacha, but whatever Yiddishkeit did exist was nothing short of a miracle. As the helicopter landed on the dried riverbed, the entire town came streaming down the hillsides to greet us. Men and women, boys and girls, all smiling and laughing, with faces glowing with joy. I cannot forget the scene of all the townspeople surrounding our helicopter, welcoming the Rebbe and our group. It was a scene that I imagine we would see when Moshiach will arrive, b'mheira b'yameinu.

[Photo: The Chacham of the Jewish community in Kuba]
[Photo: The delegation from Vaad L'Hatzolas Nidchei Yisroel, headed by the Novominsker Rebbe, in Red Square]

Our adventure began when we were flying back from Kuba. Suddenly, I saw that we were landing on a deserted plateau, and we were told that we must refuel. We saw no gasoline facilities, so we were a bit confused. We were told that we must leave the plane while refueling, which we naively did. Once out on the rocky plain, we were told that if we wanted to return to Baku, we must give them a lot more money. Basically, we were being hijacked and blackmailed. It was surreal. We had contrasting emotions. On the one hand, we were frightened, and yet the humor of the situation did not escape us. After a while, some members of the group negotiated a more realistic amount and paid the ransom. Finally, we boarded and flew back to Baku.

Though our original plans called for us to spend a few hours visiting our school there, it did not happen. At the airport we saw flashes of light in the sky and heard loud thunder-like sounds in the direction of the city. We were then informed that there was a gunfire battle between Armenia and Azerbaijan, and it was not safe to enter the city. We waited awhile and then boarded again for our next destination: Tbilisi, Georgia.

It was an awfully long day. We started out in Moscow, refueled in Volgograd, landed in Baku, flew with the helicopter to Kuba, were hijacked and returned to Baku, and now we finally arrived in Tbilisi. We were extremely tired and kvetchy and could not wait to take a shower and get into bed. After our first night in that shabby, roach-infested hotel in Moscow, followed by two nights on straw-filled mattresses, we really needed a good night's rest. We thought we were seeing a mirage when our bus pulled up to a brand-new, tall, luxury hotel with a mahogany-embellished and well-lit lobby. After a few minutes of unabashed excitement, we realized that this hotel was reserved for those who had made special arrangements and paid the premium to stay there. The rest of us were driven and deposited at a three-story rundown wooden edifice surrounding a courtyard. The building was straight out of a 17th or 18th century inner-city slum. This was going to be our hostel for the night.
We dragged our heavy luggage (since we were going to be in Europe for four weeks) into our first-floor room, and as we walked around, we realized that the rug-covered floor was soaked in water. I called the manager, who explained that the room was on top of a Turkish bathhouse and steam vapors were causing this to happen. The other option, which we took, was a third-floor room where we had to schlep our luggage up an open-air staircase. There, although there was no water on the rug, there was also no water in the shower. Needless to say, we did not have a very pleasant night, but we were too tired to care.

[Photo: Our dilapidated hotel in Tbilisi]

The next day we visited both the Sephardi and Ashkenazi synagogues. We stopped at the kosher butcher and inspected the mikvah. We davened at the yeshiva and saw the students learning Torah. We had a traditional breakfast of tandoori bread, tomatoes, olives and spicy dips as we heard about Jewish life in Tbilisi from Chief Rabbi Ariel Levine. Our group was very impressed, and an appeal to help maintain our projects was made on the spot. Our donors came through by enthusiastically pledging considerable support for the Vaad mosdos.

Our successful trip was winding down, but not over yet. That afternoon we again boarded a plane, which took us to our final stop: Kishinev, Moldova. There, unbeknownst to us, we faced another adventure and challenge. To be continued…

Hershel Lieber has been involved in kiruv activities for over 30 years. As a founding member of the Vaad L'Hatzolas Nidchei Yisroel he has traveled with his wife, Pesi, to the Soviet Union during the harsh years of the Communist regimes to advance Yiddishkeit.
He has spearheaded a yeshiva in the city of Kishinev that had 12 successful years, with many students making Torah their way of life. In Poland, he lectured in the summers at the Ronald S. Lauder Foundation camp for nearly 30 years. He still travels to Warsaw every year – since 1979 – to be the chazzan for Rosh Hashana and Yom Kippur for the Jews there. Together with Pesi, he organized and led trips to Europe on behalf of Gateways and Aish Hatorah for college students finding their paths to Jewish identity. His passion for travel has taken them to many interesting places and afforded them unique experiences. Their open home gave them opportunities to meet and develop relationships with a variety of people. Hershel's column will appear in The Jewish Home on a bi-weekly basis.

[Photo: At a Motzei Shabbos concert at summer camp]

My Israel Home
Combating Illegal Construction
By Gedaliah Borvick

Ever since the founding of the State of Israel, the country has been in growth mode. Despite the seemingly endless construction nationwide – the joke is that Israel's national bird is the "crane" – Israel has a shortage of over 100,000 apartments due to the country's expanding population. … – to complete. Upon conclusion of the suit, the violator would only be penalized up to 15,000 NIS (under $4,000), a mere slap on the wrist. Understandably, it didn't pay for the government to act, due to the length of litigation and the paltry penalty sums.
The only issue that gave buyers pause was financing: the conservative… both the builder of the illegal space as well as the apartment owner – and the penalties are large: a 1,400 NIS daily fine for up to 90 days, plus up to 300,000 NIS, depending on the amount of space illegally constructed.

What It Means for You

By investigating thoroughly, you will be armed with the knowledge necessary to make sound legal and business decisions prior to purchasing your apartment.

Disclaimer: This article was meant for informational purposes only and should not be construed to come in place of using legal counsel and hiring professionals to carry out all due diligence, including reviewing all legal and planning issues, prior to purchasing an apartment.

Gedaliah Borvick is the founder of My Israel Home (), a real estate agency focused on helping people from abroad buy and sell homes in Israel. To sign up for his monthly market updates, contact him at gborvick@gmail.com.

State of the Union 2019
President Trump: "We Must Choose Between Greatness or Gridlock"
By Susan Schwamm

Political theater was on full display on Tuesday night when President Trump delivered his second State of the Union address in front of a divided Congress. With the Democrats holding a majority in the House of Representatives and Speaker of the House Nancy Pelosi seated behind him, Trump reached across the aisle. "I stand here ready to work with you to achieve historic breakthroughs for all Americans," the president declared. In a speech which many anticipated would bear more of a resemblance to a political wrestling match than a civilized discourse, the president struck a gracious tone and asserted, "Victory is not winning for our party.
Victory is winning for our country." He also touched on the vitriolic hatred of him that Democrats and the media have by declaring, "We must reject the politics of revenge, resistance, and retribution…. We must choose between greatness or gridlock, results or resistance, vision or vengeance, incredible progress or pointless destruction.

"Tonight," the president said, "I ask you to choose greatness."

Mr. Trump touted the economy and noted that, since his election, the U.S. has added 5.3 million new jobs, including 600,000 new manufacturing jobs; nearly 5 million people have stopped collecting food stamps; and more people are working now than at any other time in our history. He noted that just last month 304,000 jobs were added. "An economic miracle is taking place in the United States," he noted, "and the only thing that can stop it are foolish wars, politics, or ridiculous partisan investigations." In perhaps the most direct swipe at his adversaries, he declared, "If there is going to be peace and legislation, there cannot be war and investigation. It just doesn't work that way!"

For a president who is usually shielded by his Twitter feed, Tuesday night was the first opportunity in a long time for millions of Americans to see Mr. Trump in an official setting, rather than at an impromptu press conference with helicopter blades humming behind him. If he is really an unread, unhinged, undisciplined and unpresidential fool – as the talking heads in the media portray him to be – well, it was not on display on Tuesday night. Rather, what Americans saw was a president who is very much in command of the issues, has a vision for the country, and – perhaps most surprising to some – was able to deliver a speech with the same finesse as his silver-tongued predecessors.
Although Democrats entered the Chamber with dour looks on their faces and early on during the speech sat on their hands, towards the end of the speech they joined Republicans in singing "Happy Birthday" to an 81-year-old Holocaust survivor and survivor of the Pittsburgh massacre who was in the audience. Mr. Trump then ad-libbed to Judah Samet, "They would never do that for me; they would never do that for me." Samet, in turn, waved and blew a kiss to the president.

Another Holocaust survivor was in the audience on Tuesday night. Joshua Kaufman was in Dachau when he saw American soldiers roll in on tanks. Mr. Trump recalled Joshua's thoughts at the time: "To me…the American soldiers were proof that G-d exists, and they came down from the sky." Seated next to Joshua at the address was Herman Zeitchik, an American soldier who helped liberate Dachau.

When Mr. Trump highlighted that more women are employed today than ever before, a few dozen Democrat females, wearing white as a symbol of female power, jumped to their feet and began laughing and pointing to themselves, as if to indicate that the reason more women are working is because of newly elected Democrat women who swept their party into power. Mr. Trump smiled, turned to the Democrat women, and said, "Remain standing because you are going to like this next line," before declaring, "We also have more women serving in the Congress than ever before."

With the recent government shutdown and back-and-forth over a border wall having delayed Mr. Trump's address for one week, the president made a passionate plea for common sense. "No issue better illustrates the divide between America's working class and America's political class than illegal immigration. Wealthy politicians and donors push for open borders while living their lives behind walls and gates and guards," he noted.
"Meanwhile, working class Americans are left to pay the price for mass illegal migration – reduced jobs, lower wages, overburdened schools and hospitals, increased crime, and a depleted social safety net." He underscored, "Tolerance for illegal immigration is not compassionate – it is cruel." Mr. Trump pointed out to Congress and Americans around the country that one in three women are assaulted while making the journey over the border and that children are exploited. Along the same lines, the president vowed, "Now is the time for the Congress to show the world that America is committed to ending illegal immigration and putting the ruthless coyotes, cartels, drug dealers, and human traffickers out of business."

The president noted that walls have worked to provide security to our nation. He pointed to San Diego, which used to have the most illegal border crossings in the country. Since a border wall was erected, the illegal crossings have ended. Mr. Trump also mentioned the border city of El Paso, Texas, which used to be pummeled by high rates of violent crime and was considered one of the nation's most dangerous cities. But now, with a wall in place, residents there experience peace and security. "Simply put, walls work and walls save lives," the commander-in-chief noted. "So let's work together, compromise, and reach a deal that will truly make America safe."

Mr. Trump also took on the new wing of the Democrat Party which openly advocates for socialism. "Here, in the United States, we are alarmed by new calls to adopt socialism in our country. America was founded on liberty and independence – not government coercion, domination, and control." As the cameras panned the audience and focused on Vermont Sen. Bernie Sanders, a proud socialist, who was squirming in his seat, the president declared, "We are born free, and we will stay free.
Tonight, we renew our resolve that America will never be a socialist country." Sanders glowered while chants of "U.S.A." could be heard from Republicans in the chamber.

[Photo: Astronaut Buzz Aldrin gives the president a thumbs up during his speech]
[Photo: Judah Samet, survivor of the Holocaust and the Tree of Life Synagogue shooting, blows a kiss to President Trump during the speech]
[Photo: Bernie Sanders called Trump's speech "racist"]
[Photo: Alexandria Ocasio-Cortez looked dour as Trump wowed America]

Mr. Trump turned his focus to trade and American progress. He noted that his administration has been working to stem the tide of Chinese theft of American intellectual property and spoke of tariffs imposed on the Asian country as a result. He also mentioned NAFTA and said that although other politicians promised Americans a "better" deal, "no one ever tried – until now."

Speaking about an emotional topic that has recently garnered headlines, the president pulled on people's heartstrings. "There could be no greater contrast…," Mr. Trump asked.

Towards the end of his speech, President Trump spoke about his mission to protect the United States' national security. "Under my Administration, we will never apologize for advancing America's interests," the president asserted. He mentioned China and Russia and reiterated his desire for peace on the Korean Peninsula. Speaking about meeting with Kim Jong Un in Vietnam on the 27th and 28th of this month, the president noted, "Our hostages have come home, nuclear testing has stopped, and there has not been a missile launch in 15 months….
Much work remains to be done, but my relationship with Kim Jong Un is a good one."

President Trump has consistently shown his understanding of the Middle East and America's enemies there. He said that he "proudly" opened the U.S. Embassy in Jerusalem for this reason and pledged to walk back American troops from battlefields in Syria and Afghanistan. "The hour has come to at least try for peace," he declared. Speaking of Iran, which he said is the "world's leading state sponsor of terror," the president said, "…"

Trump ended his address to the nation on a high note, urging members of Congress to remember the accomplishments of those who came before them and to be inspired by what they can achieve. "Think of this Capitol – think of this very chamber, where lawmakers before you voted to end slavery, to build the railroads and the highways, to defeat fascism, to secure civil rights, to face down an evil empire," he urged.

Urging both parties to come together, the president said, "We must choose whether we are defined by our differences – or whether we dare to transcend them.

"We must choose whether we will squander our inheritance – or whether we will proudly declare that we are Americans."

He added, "This is our future – our fate – and our choice to make. I am asking you to choose greatness."

Bringing Congress and the nation to its feet, President Trump proclaimed, "No matter the trials we face, no matter the challenges to come, we must go forward together. We must keep America first in our hearts. We must keep freedom alive in our souls. And we must always keep faith in America's destiny – that one Nation, under G-d, must be the hope and the promise and the light and the glory among all the nations of the world.

"Thank you.
G-d bless you and G-d bless America.”

DO YOU REALLY THINK TOM BRADY IS HAPPY?
By Noam Fixler

If you answered, “He’s certainly not happy,” to the title question, you are right – and you should probably put down this article and turn the page. For the rest of you, I apologize for the title – it was just a way to get rid of that guy who shouldn’t be reading this. Mission accomplished. Regardless of whether Tom Brady is happy or not, I am happy that he won the Super Bowl last Sunday because it is the rare time that I can point to a sports superstar and highlight his qualities to my children. Now, just in case the guy from above snuck back in and is reading this, let me say: Of course, we have plenty of excellent role models within Yiddishkeit and there is absolutely no need to look outside of our ranks for role models. But the fact is that many of our children follow sports and know a lot about these sports players’ lives and characters. The information they’re fed and the character traits they’re looking at are usually negative. So, it’s nice for a change to have a sports star who has certain positive character traits that we can highlight. Yes, my children have tens of books about gedolim, which is their primary reading material and what we hope they model their behavior after, but sometimes you can reach a child by coming from a different angle. I think Tom Brady is the perfect conduit. Just to be clear, I don’t know anything about Tom Brady’s personal life, but what we can emulate involves how he goes about succeeding in his professional life. Before the NFL draft, pro-football scouts analyze and investigate every aspect of the prospective draft choices. The players are put through rigorous physical tests, and the game-film of every game they ever played from college to middle school is scrutinized.
Tom Brady was picked late in the 6th round of the 2000 draft. That means that he was passed over 199 times before finally being chosen. Nineteen years later, he holds the record for most regular season wins by a starting quarterback (207), most division titles (16), most playoff wins (30), most Super Bowl appearances (9), most Super Bowl wins (6), and most Super Bowl MVPs (4). Additionally, at the age of 41, he is also the only player from the 2000 draft that is still playing in the NFL. It’s common for players who are selected high in the draft to not live up to their potential. But it’s unheard of for a player to go from the bottom of the heap to the top. And Brady went to the top of not only his class but is arguably the best or one of the best players to ever play the game. So how did the NFL scouts get it so wrong? The answer is that they didn’t. Brady just rose above his natural abilities. How he did that is what we can all learn from.

SELF-BELIEF

According to Brady’s roommate at Michigan, when Brady was a third string quarterback in college he would constantly say, “I’m going to be a starting quarterback in the NFL.” Brady’s roommate told ESPN that several years later, in 2001, Brady was a backup quarterback on the Patriots and they were together at an event. Brady said, “You know, I think I have a great shot at being quarterback this year,” and he proceeded to explain how it was going to happen. Someone at the event quipped, “There’s [Drew] Bledsoe, there’s Michael Bishop. What is Tommy even thinking?” Sure enough, Brady ended up as the starting quarterback and led the Patriots to their first-ever Super Bowl victory. The Lubavitcher Rebbe used to always say, “Tracht gut un es vet zein gut – Think good and it will be good.” Brady is certainly not a Lubavitcher, but his positive thinking and his self-belief have indisputably played a role in his success.
HARD WORK

There are countless anecdotes about how hard Brady works. In a recent interview, a Patriots executive recalled leaving the Patriots facility late one Friday evening in March several years ago. As he was driving out, he noticed that the lights on the practice field were on. He thought it was odd that they forgot to shut off the lights on the field that day, so he turned around and went to shut them. Sure enough, he observed Tom Brady on the field alone throwing footballs into a net. Brady was caught throwing balls late at night – after everyone left the facility. That is impressive enough. But more than that, this event took place in March, just a few weeks after the football season was over. Brady practicing late at night in March is akin to a Jewish mother deep-cleaning her home two weeks after Pesach or a teen burning the midnight oil studying algebra days after the Regents. Most football players spend the time after the season basking in the sun or lying in the sand. Brady? Fuhgeddaboudit. He’s still at it, as if the playoffs are days away. Former Patriot Rodney Harrison told ESPN a story about how hard Brady works: “When I first got to New England, we [Harrison and Brady]… Before he could say anything to me, I looked at him and said, ‘Man, I don’t [care] what you say, Tom, I’m not coming in earlier than 5:30!’ We both laughed at that.” Brady goes to sleep at 8:30 p.m. so he can get up at 5:30 a.m. to start his day. “I don’t go to bed at 1 a.m. and wake up at 5 a.m. and say, ‘Let’s see if I can get this done today,’” Brady told ABC News in 2016. “Because my career is so important, I think I make a lot of – I wouldn’t call them sacrifices – but just concessions for my job.” Early mornings, late nights, working harder than anyone else… those values translate into success.
FAMILY

In most post-game interviews, before answering the reporter’s first question, Brady looks at the camera and says, “Hi, Mom and Dad, I love you.” Some may think it’s an act, but under Brady’s high school yearbook photo it says, “Family, I love you all.” That was before he became famous. Now, with six Super Bowl rings, Brady has not forgotten from where he came. When asked last year who his hero is, Brady got choked up and responded, “Who’s my hero? That’s a great question... Well, I think my dad is my hero, because he’s someone I look up to every day.” For children who are reading this article, remember that your parents, siblings, and grandparents, aunts, uncles and cousins will always love you and will always be there for you. Don’t forget from where you have come. There are so many role models within your family whom you can – and should – look up to for guidance and direction. Even when you spread your wings, remember that your parents are the ones who have helped you to learn to fly. As for the adults reading this piece, well, don’t you want your children to call you their hero? Now’s the time to impart the important lessons of life to your children. Do it with love and acceptance; show your children that you believe in them; show them that they have what to be proud of. Our children will be navigating their own worlds soon and – whether they’re holding the Lombardi Trophy or not – they too should draw strength from the love and lessons that they have received from those who will love them forever.

Dating Dialogue
What Would You Do If…
Moderated by Jennifer Mann, LCSW of The Navidaters

Dear Navidaters,
I am a 23-year-old Ashkenaz young woman. Normally, I wouldn’t point out to anyone that I’m Ashkenaz, but it relates to my question. A friend of mine recently married a young man who is Sephardi. She is trying to set me up with her husband’s good friend, who is also Sephardi. I never really considered dating anyone from such a background.
I have an aunt who got married to a Sephardi man many years ago and eventually got divorced, claiming that their cultural differences were too much, but even more important, she never really felt that her in-laws and their very tightknit extended family and friends ever really accepted her and her family. She was forced to spend most holidays with her in-laws, and her family felt like second class citizens. The man my friend wants to set me up with sounds wonderful, but I can’t help remembering all the talk about what my aunt went through and wonder whether it makes any sense at all to even get started with someone like this. I know that in general, the Sephardi community is very closely connected and tends to stay with their own. But I’m wondering whether times are changing and there is a greater acceptance of people like myself, who are from an Ashkenaz background. Any advice as to whether I should stay clear or whether I should give it a shot?

The Panel

The Rebbetzin
Rebbetzin Faigie Horowitz, M.S.

It’s easy to give a simple answer to this question. My response is to do serious soul searching and begin a mindful process. Don’t give an answer right away. You can tell the person setting you up that you need some time to think about this. Some of the things to consider are your past experiences with other Jewish cultures. Do you live in a monolithic community? Have you always stayed within your cultural group of origin? Have you spent Shabbos in other kinds of Jewish homes? Have you spent extended periods of time with other kinds of Jewish adults and observed differences in mindsets, practice, norms, relationships, family roles, and traditions? Are you an easy adapter to major changes? Keep in mind that Jewish families and life are centered around Shabbos and holiday celebrations. That means that they are closely connected on a regular basis.
A marriage between members of distinctly different Jewish cultural groups is not just about the couple’s relationship; it’s about dealing forever with being an outsider and newcomer in a group you need to actively join. It’s about a lot more than having your minhagim take a backseat. It’s about being active and embracing a lot of values and expectations that you cannot understand until a while later. A number of years ago, I wrote a magazine article on Jewish “mixed marriages.” I interviewed several long-married couples as well as veteran therapists who do couples counseling. Expectations were a big subject. Extensive advance discussion about how things are done in each culture should take place so that there are fewer disappointments. The marriage partners need to have personal strength. Parental involvement has the potential for complicating things more so than with marriages within the same culture, pointed out one professional. If the couple has a chance to work things out on their own or with objective outside help, the differences will be smoothed out and both will reach a level of comfortable accommodation of each other’s background. I vividly remember a South American native who married a Midwesterner telling me that one’s relationship and one’s home should be an impregnable fortress. In-laws, dominant cultures, and expectations must remain on the outside. Are you mature enough to deal with the above in an ongoing process that continues for years? You are the only one who can answer this question.

The Mother
Sarah Schwartz Schreiber, P.A.

True confession. I am the product of a mixed marriage. My father came from Galicia; my mother from Klausenberg. Back in the post-War era, their imprudent Polishe-Hungarian alliance seemed hopelessly condemned to marital friction and discord. Instead, their marriage was harmonious, lasting more than forty years. When it was my turn, I duly “married out” – squelching my warm, neo-Chassidishe background to marry a Litvak.
No biggie, you say? The first time I confronted kneidlach (gebrokts!) on Pesach, I trembled in fear of an imminent lightning bolt descending from On High, cracking the seder plate and striking me speechless if I so much as tasted those forbidden orbs bobbing in fragrant broth. Polish marrying Hungarian? Litvaks marrying Chassidim? Ashkenazim marrying Sephardim? Puleeze! So many of the cultural biases of earlier generations seem so outdated and irrelevant by today’s standards. The year is 2019, my friend. Kibbeh is served alongside kugel. Traditional Ashkenazic yeshivas are home to thousands of Sephardic students. Ashkenazim and Sephardim, like your friend and her husband, socialize and intermarry every day and – what’s more – get along very well. Sure, cultural compatibility is a positive force in marriage; there’s less room for misunderstanding when two people “speak the same language” or “get” where the other’s coming from. Still, it’s not everything. Think about it: your aunt’s unfortunate marriage was destroyed by her primitive, meddlesome in-laws, not her Ashkenazi-ism. If culture was everything, same-race couples would rarely split. If you and your family can get past the Sephardic factor, I encourage you to meet this “wonderful young man.” Is he defined by his Sephardic-ness or is he a baal middos who happens to be Sephardic? If you feel there is merit (i.e., respect, sincerity, intelligence, attraction) in this match, prepare for some focused conversations. Talk to the young man about rituals and traditions, his relationship with his parents and his prospective in-laws, his thoughts about sharing chagim, his ethos about raising a family. Take your time to get comfortable with both him and his culture. You know, now that I think about it: rice on Pesach? Even better than kneidlach!

The Shadchan
Michelle Mond

You are correct in your perception that many Sephardic families live a very different lifestyle than Ashkenazim.
Shana rishona can be hard enough, and, depending on how you grew up, the nuances and intricacies of the cultural norms can be enough to make this kind of mixed marriage difficult to navigate. I will not downplay your concern that marrying into a different culture will likely be difficult to get used to. While there are many culturally Sephardic families, there are also some who consider themselves “Sephardic by chance.” It sounds like you are turned off by your aunt’s extreme experience many years ago. Much has changed in the past few decades. You can’t compare your aunt’s ex-husband from so long ago to a shidduch prospect coming up right now. At the same time, it is understandable to be cautious. Your first step should be calling references to find out more about the boy and his family. After all, there must have been a good reason your friend thought of this shidduch! Are they a family that is a few generations American who happen to eat rice on Pesach? You won’t know until you ask. Based on the information you hear, you can make an informed decision about whether you are ready to meet this young man despite your differences.

The Single
Tova Wein

Though it’s never good to generalize, for the sake of your questions, I feel that it is the only way to begin my answer. From my own experiences, I have to agree that there are very obvious differences between typical Ashkenaz and Sephardi families. Aside from certain observances, expectations within families are also very disparate. Some of my Sephardi friends that I knew growing up spent every Shabbos meal and yom tov with grandparents, aunts, uncles and cousins. At the time, I thought they were so lucky, having such a huge extended family always around them.
But as I got older, I started wondering what it must be like to never be allowed to move far away from family and experience some autonomy. I also began to wonder what would happen when they married and the in-laws lived far away. As it happens, both of my two friends wound up marrying men who lived within their tightknit circle, and so the pattern continued smoothly. Entering into such a dynamic would therefore probably not be easy. I feel that as an Ashkenazi, you would feel like the outsider, always looking in and having to make concessions. And these concessions would probably also affect your immediate family as well. Now that I’ve talked in general terms, there are always exceptions to every rule. Maybe this young man comes from a Sephardi family that operates very differently and subscribes to values that are much more similar to the ones that you are familiar with. It’s possible, and therefore, if you’re truly interested in exploring this opportunity, ask around. See if he has any married siblings and learn as much as you can about the way they live and whether it is something that could feel right for you. Either way, go into it with your eyes wide open so that you are not ultimately unpleasantly surprised.

Pulling It All Together
The Navidaters
Dating and Relationship Coaches and Therapists

I’m so glad you wrote in. This is a delicate question and a sensitive topic. My intention is twofold: I want to be culturally sensitive and explore your concern. If you were a Sephardi woman writing in with concerns about dating an Ashkenazi man, my answer would be along the same lines. Over the years, I’ve had a handful of clients who have been in your shoes. Unfortunately, as with most big decisions, there is no one right answer and no crystal ball to tell you whether this will work out.
Anyone you ask for advice who offers you a “yes” or a “no” in the way of telling you what to do will be advising you based on their own life experiences or stories they’ve heard, both positive and negative. I am so sorry that your aunt wasn’t in a great marriage. When we intimately witness a loved one go through any harrowing event or disturbing long-term situation, we are, in a way, living through it as well and it can inform our own future decisions. Had her marriage been wonderful, you may be more inclined to say, Of course I’ll date this guy. My aunt has the best marriage. There are Ashkenazi/Sephardi marriages that don’t work out due to cultural differences, extended family dynamics, lack of respect, etc. And there are Ashkenazi/Sephardi marriages that work beautifully; they are culturally sensitive, open-minded and loving. There is an ability and willingness to focus on the person, not the heritage. There is the idea that what unites us is far greater than what divides us. Personally, I have a very hard time with sweeping generalizations, and I encourage my clients to keep an open mind when being set up. However, if marrying someone outside of your culture is an absolute non-negotiable clause, that is a personal choice that I usually do not challenge. I think there are two considerations here. The first is: how do you feel about change? The second is related to finding out how he and his family welcome Ashkenazim into their home and family. Some food for thought on the subject of change: How do I feel about change? How do I feel about compromise? As I think about change, what is coming up for me? Whatever that is, am I willing to look at it, explore it and work through it? Am I open to a different culture and customs? How do I feel about the notion that Mr. Right could potentially be Sephardi? Some things to look out for should you decide you are open to change and begin to date: How do I feel around him? Is he respectful of my culture and values?
Am I respectful of his? Does his family treat me well? If they don’t, what does he have to say about it and how does he handle it? As several of the panelists already mentioned, should you decide to date people outside of your culture, it is very important to pay attention to the way the family receives you. More often than not, families do not pull a “bait and switch.” Yes, people are on their best behavior (hopefully) while their children are dating. But, if you pay close attention, you will get a good feel for the way the family feels about you and your culture. I will share that I grew up with Ashkenazim and Sephardim. I would say my school was 50-50. My best friend was Sephardi. I knew lots of children whose families were “blended.” I loved being at my best friend’s house. They were the warmest family, and I couldn’t get enough. Saturday night sleepovers were the best because of Sunday morning breakfasts filled with the most delicious food I ever ate. When my mother would pick me up, my friend’s mother would give her a giant bear hug, call her name with exquisite delight, and invite her in to eat too. Ultimately, should you decide to date this “wonderful” guy from a different culture, it will be a tremendous learning opportunity. The worst that will happen is that it will be another life experience under your belt. Keep in mind two things. First, if he is willing to date you, then we have to consider that he is openminded. (A wonderful trait!) Second, it’s just coffee and a danish, not tanaim. All the best, Jennifer Esther Mann, LCSW and Jennifer Mann, LCSW are licensed psychotherapists and dating and relationship coaches working with individuals, couples and families in private practice in Hewlett, NY. To set up a consultation or to ask questions, please call 516.224.7779. Press 1 for Esther, 2 for Jennifer. Visit for more information. If you would like to submit a dating or relationship question to the panel anonymously, please email thenavidaters@gmail.com. 
You can follow The Navidaters on FB and Instagram for dating and relationship advice.

Dr. Deb
Be a Man
By Deb Hirschhorn, Ph.D.

This guy was pretty unhappy. He said he didn’t want to think about the possibility that his wife would end their marriage. He spoke with great pain about that fight they had in which the horrible “D” word was uttered. As we were going through the call and I was trying to understand the chronology of how things went down, I made the – incorrect – assumption that this bad fight had happened in the last few days. Wrong. “So when did this happen?” I casually asked about a half hour later. “Oh, that was two years ago.” What?? Two years ago??? “Why did you wait ‘til now to reach out to me?” “Well, I thought things would get better.” How? How could things get better when you do not know what made them bad? How can things get better when very clearly your wife must have been frantic with pain and frustration two years ago? And did things get better? Of course not. Why would they? He described the icy wall he lives behind, the things that maybe she was doing that he was terrified to look into further. Guys, wake up! The women are tired of waiting for you to step up to the plate. If only that were the only man who said the same things. But it seems like droves of them are doing the exact same thing – which is nothing. Waiting. Hoping things will change based on absolutely no new learning, no new behaviors, no new attitudes, no new demonstration of a willingness to get help. My hair is turning gray from these men! And I imagine their wives’ hair is too. Oh, yes, there is one step they often take to show they want things different. They say, “Come to marriage counseling with me.” Well, I have news for you, men: the women aren’t going for the same reason that they aren’t explaining to you what the problem is, and that happens to be the same reason that they have built a wall and are heading out the door: They want you to be the man. The man knows what to do. The man takes leadership. The man understands their needs. The man understands his own needs and how to articulate them. The man “gets” them at the deepest core. The man takes action immediately to resolve the problem. The man does not wait two years to get around to reaching out for help. Okay, that’s not you. I understand. These are skills that were not necessarily given out to every man at birth. That’s why there are people like me who are here to help you. I’m sitting here, waiting for you to reach out for help. Only you can do it. I can’t do it for you. But instead, the men just wait. For what? I cannot imagine. How did men get this way? I think they’re afraid to be aggressive. And that is the good part. They should not be aggressive. That would be like jumping out of the frying pan and into the fire. One guy explains that his mother yelled a lot so he learned to cow to women early in life. Another man explains that his mom left the family so he is terrified of making a move that will scare off his wife. I get that, but his non-move is chasing away his wife anyway. When there is nothing left to lose, c’mon, man, stick your neck out! A third man tells me his father just “took it” from his mother, and that is all he knows. Men, it’s time to learn differently. It’s time to become different. You cannot remain a wilting lily. You cannot remain scared of being who you were meant to be. You cannot sit back and keep expecting your wife to do the heavy lifting in the relationship. It’s not going to happen. Her muscles weren’t made for that. You’ve got to learn to do it. If you don’t, the thing you fear most will happen. And it will happen because of you. It’s better to face your fears and be the man. Without aggression, of course.
If that puzzles you, I get it. Book a call to get some help. But, please, do not bother if you can’t make a decision. Do not bother if you will not be 100% committed to making the changes you need to make. So, yes, part of the change must take place before you book: plan to become the decisive, committed person you were meant to be.

Dr. Deb is a Marriage & Family Therapist. Book a consultation with her to get clarity on the issues in your marriage and learn about her innovative program. To book a call with Dr. Deb, go to her scheduler, but if you want more information about her new program, please first watch the Masterclass on “Getting The Marriage You Want.”

Health & Fitness
How to Approach Shabbos the Healthy Way: Side Dishes
By Cindy Weinberger MS, RD, CDN

When it comes to side dishes for Shabbos meals, think beyond the traditional favorites. Sure, the symbolic foods of a Shabbos meal include potato kugel, farfel, lokshen kugel, and deli roll, but however good they may be, they are not the best choices for your Shabbos side dishes. These foods are high in calories and low in nutritional value. Especially at this point in your meal, you can afford to spare all of these extra calories. Side dishes at a Shabbos meal can be tricky. By the time you are up to the main course, you most probably had your allotted starches, proteins and most definitely fat for the meal. Essentially, your only allowance is vegetables. You might have some wiggle room for starches if you ate mindfully prior to the main course. Therefore, my recommendation is to stick to vegetables as side dishes. During the week, when you have more leeway, you can stretch your dishes to be more creative; however, at a Shabbos meal, where you have already used up your starches and fats and will most probably be hitting the caloric bar, your best bet is to stick with veggies. When I say vegetables, I mean all types and every type.
Your vegetables can be made from fresh or frozen vegetables. You can eat them raw, cooked, roasted, broiled, grilled – however you like – but not in a kugel. That completely defeats the purpose. Yes, there are some kugels that can be made healthier than others, and we’ll get to that later. Why do I stress the importance of eating vegetables as a side dish? Not only do vegetables offer nutrients, vitamins, and minerals, they are low in calories. Eating vegetables is essentially a freebie. By cutting out the kugels and carbs as side dishes, you are not only saving yourself extra calories, but by eating the vegetables you are actually gaining (not weight, hopefully). You’re getting antioxidants and other benefits, while filling up at the same time. My suggestion is to serve cooked/roasted vegetables on Friday night since they are better when hot. One great way of preparing vegetables that is so versatile and easy is grilled vegetables. Grilled vegetables are a classic. If you don’t favor one vegetable, try another. Try peppers, zucchini, eggplant, onions, mushrooms…you name it! I like grilling portobello mushrooms. Marinate mushrooms in balsamic vinegar, olive oil, garlic, mustard and fresh herbs for about 30 minutes at room temperature. Grill mushrooms for about 5 minutes on each side. Serve with tomato, onion slices, and lettuce. Or cook mushrooms with wine, olive oil, and spices. Another great idea is to baste vegetables such as red peppers, eggplant, summer squash, cherry tomatoes, mushrooms or onions with olive oil and season with herbs and grill them. Other types of veggies aside from the traditional ones include asparagus, Brussels sprouts, cauliflower, broccoli, green beans, roasted root vegetables, eggplant, cooked zucchini, lecho, and carrots.
These can all be made with a simple, quick recipe of throwing on salt, pepper, garlic, olive oil, and roasting in the oven at 400ºF until ready. For Shabbos lunch, the number one side dish is a salad. Everyone will appreciate a salad. If you want to make it more fun, make a variety of small salads. Start with a traditional vegetable salad (you can add in grilled chicken or turkey and use this as your complete meal). Others might enjoy a crunchy cabbage salad too. Spinach salad is another great option. Kale and arugula are very trendy as well. Trust me, you won’t be hungry. If you’re having guests and you want to make foods that are healthy that you can enjoy too, make an array of salads – as if you are at a shawarma salad bar. Your guests will absolutely love this. Prepare a carrot salad, a beet salad, cucumber salad, tomato salad, three-bean salad, pickled vegetables, Israeli salad, guacamole… any salad that comes to mind that does not contain a mayonnaise-based dressing. If you need a starchy side dish, go for a quinoa salad, quinoa patties, whole wheat noodles with cabbage, brown rice, sweet potatoes, butternut squash, corn on the cob, whole wheat linguini salad, or whole wheat angel hair pasta salad – these are basically a starchy vegetable or a whole grain starch. If you insist on the kugels, make substitutions. For potato kugel, cut back on the oil or, even better, use coconut oil. Use egg whites instead of whole eggs. If you’re making broccoli or spinach kugel, don’t think it’s “healthy” because the main ingredient is a vegetable. Yes, the broccoli or spinach alone is healthy, but when mixed with oil, flour, mayonnaise, and eggs, the healthy part fades away. As mentioned above, use less oil, light mayonnaise instead of real mayonnaise, and egg whites. Try almond flour or whole wheat flour instead of white flour. Instead of sugar, use honey or even a sprinkle of cinnamon. I advise making individualized small kugels either in muffin tins or small ramekins so that your portion size is controlled. One piece of kugel would be considered your starch and some fat (depending on the type of kugel) for that meal. At this point you are so full, who could even think of dessert? For those who always have room for dessert, read the next installment in this series to learn about some of the best options for dessert.

Health & Fitness
Diet Baggage: Do You Have It?
By Alice Harrosh

Signs of diet baggage include:
• Confusion about what diet to trust or choose, even after choosing one
• Thinking some food groups are always bad, even if limited
• Thinking some food groups are always good, even unlimited
• Frequent switching from plan to plan in the hopes of finding “the best one”
• Reading up on diets from non-reliable sources such as the internet or magazines
• Inability to commit or stick to something long term in the hopes of finding something better
• Inability to trust the practitioner

If you recognized yourself in two or more of the above, you may have diet baggage. Diet baggage comes from having been on many different diets and being so burned out that you no longer trust or are fully willing to give yourself over… more dangerous than drinking some diet soda. While the two above scenarios were related to food beliefs… After four babies she quickly got back to herself without much effort. Now, after baby number five… Chaya has always been overweight, even as a child. At her wedding 12 years ago, after strict dieting, she managed to get to a size 12 and felt amazing.
Twelve years and five children later, things are different:
• It's been 12 years
• She's had five children
• She's older
• Her body has changed

As in the two above scenarios, diet baggage applies to Chaya too.

What's wrong with having diet baggage?

When you suffer from diet baggage, you have a hard time giving your all or submitting yourself to the plan you're currently on. Picture being in a restaurant, ordering something delicious, having it right in front of you and even enjoying it, but in the meantime wondering if you should have ordered something else or gone to the restaurant you went to last week instead… You already chose this place, your delicious food is in front of you, you even have to pay for it… why not just fully be present?

Diet baggage can come with weight expectations too, low or high. What makes diet baggage even worse these days? The internet! "Dr. Google" tells people all kinds of things that they add to their diet baggage list.

Trust the person you're working with, commit yourself to one plan, and succeed!

Alice Harrosh is a nutrition counselor and manager of the Lakewood, Queens and Five Towns locations of Nutrition by Tanya. Alice knows that making healthy decisions is not always easy. She understands that tempting foods can be hard to resist because she has been through the struggle herself. As an optimistic person, Alice's favorite quote is: "It's never too late to start eating better. If you have a bad morning, make it a better afternoon." She can be reached at alice@NutritionByTanya.com.

Health & Fitness
Friendships and the Early Years of Life
By Hylton I.
Lightman, MD, DCH (SA), FAAP

Early friendships shape psycho-social development and have reverberating implications for future development. Studies show that attributes such as social confidence, altruism, self-esteem and self-confidence are positively correlated to having friends. It is human nature for people to want to attach to other people. This is a universal concept.

Although parents may picture their toddlers playing nicely with like-minded friends, the reality is this social time for young children is also an arena for experimenting and learning. They don't interact the way teenagers do, let alone how we adults relate. Often, they will bang on the table. But even when they are playing side by side at playdates, Ima or Daddy can swoop right in.

Help your child navigate friendships. What if your child and his buddy fight or have a falling out? Focus on what your child is feeling and help him to process those feelings. Impart advice about the ups-and-downs of friendships. Never let your pain, Mommy and Tatty, become his pain.

Respect your child's personality when it comes to friendships. Some need to be in the thick of things, surrounded by scores of people. Others need just two or three good friends.

In The Kitchen
Beer-Glazed Wings
Meat • Yields 10 servings • Freezer friendly
By Naomi Nachman

I knew I had to have wings in my new cookbook because my husband, Zvi, always orders wings when we see them on a menu.
Often wings are thrown onto a BBQ grill or fried, but I wanted to make it easy; I bake them in the oven and glaze them with a fabulous sauce. These are perfect to serve at a get-together with family and friends.

Ingredients

Baked Wings:
4 pounds chicken wings
2 tsp kosher salt
1 tsp freshly ground black pepper
¼ tsp cayenne pepper
2 tsp garlic powder

Beer Glaze:
2 cloves garlic, crushed
¼ inch ginger, grated, or 2 frozen ginger cubes
½ tsp red pepper flakes
1 cup stout beer
½ cup honey
1 TBS Dijon mustard
4 TBS tomato paste

Preparation

Preheat oven to 450°F. Line a baking sheet with parchment paper; set aside. In a large bowl, toss together wings, salt, black pepper, cayenne pepper, and garlic powder. Place wings in a single layer on prepared baking sheet. Bake for 1 hour.

Recipe from Perfect Flavors by Naomi Nachman, shared with permission from ArtScroll Mesorah Publications. Photo by Miriam Pascal.

Notable Quotes
"Say What?!"

That's a $2.3 billion drop in revenues.
That's as serious as a heart attack. This is worse than we had anticipated. This reduction must be addressed in this year's budget. This is the most serious revenue shock the state has faced in many years.
- New York Gov. Andrew Cuomo acknowledging this week that New York's policy of constantly increasing taxes on the rich has caused many to leave the state and has led to lower tax revenue for the state

This is the flip side. Tax the rich, tax the rich, tax the rich. The rich leave, and now what do you do?
- Ibid.

- Former NBC anchorman Tom Brokaw on "Meet the Press," resulting in widespread condemnation from the media

Tom Brokaw was long one of the most respected men in America. He anchored the "NBC Nightly News" for 22 years. He's 78 years old now. He ought to be enjoying a happy retirement, fly fishing every morning. Instead, Tom Brokaw just made a terrible mistake – he expressed an unauthorized opinion in public. You can't do that. During a live television show Brokaw said that assimilation is good and that immigrants should try to learn English.
- Tucker Carlson, Fox News

So great to watch & listen to all these people who write books & talk about my presidential campaign and so many others things related to winning, and how I should be doing "IT." As I take it all in, I then sit back, look around, & say, "Gee, I'm in the White House, & they're not!"
- Tweet by President Trump after Chris Christie made the TV rounds and criticized him in order to sell his new book

I know he's working very hard to serve the best interest of the country.
- Patriots owner Robert Kraft on "Fox & Friends," talking about President Trump

I ate Jeni's ice cream and watched Netflix for three straight days.
- New Orleans Saints head coach Sean Payton disclosing how he dealt with his team's heartbreaking loss in the NFC Championship

American Indian
- The race listed on a recently unearthed 1986 Texas State Bar registration card of Senator Elizabeth Warren, who according to a recent DNA test is 1/1064 Native American

Pro-skateboarder Tony Hawk is launching his own fashion line that will include hoodies, T-shirts, flannels and carpenter pants. It's great – if you love hearing your wife say, "No. Change."
— Seth Meyers

- From an article on liberal website Daily Beast, arguing that the "Patriots are the preferred team of white nationalists"

I've also been criticized for being a billionaire. Let's talk about that. I'm self-made. I grew up in the projects in Brooklyn, New York. I thought that was the American dream, the aspiration of America. You're going to criticize me for being successful when in my company over the last 30 years, the only company in America that gave comprehensive health insurance, equity in the form of stock options, and free college tuition? And Elizabeth Warren wants to criticize me for being successful?
- Former Starbucks CEO Howard Schultz, who is considering an Independent 2020 presidential bid, responding on MSNBC to Sen. Warren's criticism of him just because he is a billionaire

My son Benny, I don't know if he'll watch one play in the game, but the fact that he gets popcorn and a bunch of junk food is what I think he looks forward to.
- Patriots quarterback Tom Brady, talking about his kids' plans for the Super Bowl

My body is broken beyond repair and it isn't letting me have the final season I dreamed of. My body is screaming at me to stop, and it's time for me to listen.
- Champion skier Lindsey Vonn, upon announcing her retirement

They can write whatever they want on their own cookie, and I can do that on mine.
- Ken Bellingham, a Washington baker who made headlines for writing "Build That Wall" on one of his holiday-themed cookies

With the wind chill warnings, we simply cannot have any criminals putting themselves in harm's way at those temperatures. Avoiding crime and criminal activities is especially important during periods of inclement weather. Also during all other times.
- Green Bay, Wisconsin Police Chief Andrew Smith announcing a ban on crime in Green Bay last week due to the -20°F temperatures with windchills of -50°F

- Virginia Governor Ralph Northam in a radio interview last week advocating for a law which would allow a mother to terminate a baby post-birth, further blurring the lines between abortion and murder

I saw this as a psychotically incoherent speech with cookies and dog poop.
– CNN's Van Jones's reaction to President Trump's State of the Union Address
Political Crossfire
Diplomats Strive to Forge a Fragile Peace in Afghanistan and Yemen
By David Ignatius

The handmaiden of peace is exhaustion. We are seeing that lesson in the killing fields of Afghanistan and Yemen. War has a momentum that's hard to stop, even when there's a broad yearning to end a conflict. Just as there's a ladder of escalation in wars, there's a ladder of de-escalation, too. In unwinding the Yemen conflict, U.N. mediator Martin Griffiths began with a ceasefire agreement in the port city of Hodeidah; next, perhaps, he can open the road to the capital of Sanaa; then, maybe, a ceasefire at Sanaa airport; then an exchange of prisoners.
Eventually the momentum of conflict slows, and problems begin to be solved in a "political" way.

(c) 2019, Washington Post Writers Group

Political Crossfire
Schultz is Calling Democrats Out
By Marc A. Thiessen

Schultz is a visionary entrepreneur who saw a latent demand for a $3 cup of coffee before anyone else.
(c) 2019, Washington Post Writers Group

Forgotten Heroes
A Stormy History of Piracy on the Seas
By Avi Heiligman

[Photos: a painting depicting the Quasi War; an oil painting of Lt. Stephen Decatur boarding a Tripolitan gunboat during the bombardment of Tripoli in 1804; Somali pirates off the coast of Africa]

Piracy has been a problem since the ancient Greeks (if not earlier). It is defined as an act of robbery or violence from one ship against another seaborne vessel. Most of the time, piracy involves the raiding of commercial cargo ships for monetary gain or to claim the vessel as a prize. Wars involving pirates and countries disturbing American commercial shipping have been taking place for well over 200 years. Actions against pirates have made the U.S. Navy the strongest in the world, with skirmishes going back to the 18th century.

Besides some relatively low-key Indian Wars and armed pesky protesters, the American government had their hands full in the years between the Revolutionary War and the War of 1812. France was not happy with the Americans owing them tons of money and not siding with them in their conflict with Great Britain. The French were in the midst of a civil war, and the Americans said that they owed money to the previous government. Beyond annoyed, France let private ships known as privateers attack American shipping vessels. Amounting to the same as piracy, privateers did damage to the fledgling American economy, and so the U.S.
entered into an undeclared conflict known as the Quasi War with them. There were several naval engagements, with the Americans capturing some French ships and freeing American merchantmen and their vessels. The Retaliation was the only American ship captured, and her commander, Lieutenant William Bainbridge, was able to secure the release of the ship and her sailors. The war lasted from 1798 to 1800 and ended with a treaty. The Americans were able to breathe easy and were free to roam the seas. According to a bibliography of the new Federalist Navy formed under President John Adams, "In the (Quasi) war, the navy proved itself an effective instrument of national policy."

The Quasi War gave the American government a legitimate reason to spend money on a navy, and soon this navy would see a lot of action off the coast of North Africa. The Barbary War (1801-1805) saw pirates from Algiers, Morocco, Tunisia and Tripoli continuously attacking American shipping along the Barbary Coast. The first merchant ship had been captured in 1784, and soon the Ottoman rulers in the region were demanding ransom money for safe passage. By 1797 the Americans were paying over a million dollars to the Barbary countries (under the overall but loose rule of the Ottoman Empire), but the pirate nations wanted more. In 1801, the pirates of Tripoli demanded that newly elected President Thomas Jefferson give them money for safe passage. Jefferson refused and sent four ships under Commodore Richard Dale to try to placate the pasha of Tripoli. When this olive branch of peace was rebuffed, other American ships sailed to the region along with Swedish ships. Over the next four years, several battles ensued, and a peace treaty finally ended the hostilities in June 1805. Many early American naval commanders made a name for themselves in this war, including Dale, Bainbridge and Stephen Decatur Jr.
Ten years later, the Second Barbary War took place, with Decatur capturing the Algerian flagship. Another treaty followed, and from then on, the Americans no longer paid tributes to the pirates. Piracy slowly ended in the Mediterranean Sea, which had been plagued by pirates for two hundred years. The American Navy was now a force to be reckoned with, and although French ships continued to be the subject of piracy for the next twenty years, American ships were left alone.

While American ships were now safe off the Barbary Coast, merchant ships were still being attacked by pirates in the early decades of the 19th century. From 1817-1825 the U.S. Navy fought a series of anti-piracy operations in the West Indies and the Gulf of Mexico. Jewish French pirate Jean Lafitte was operating in the Gulf of Mexico when the USS Enterprise was dispatched to chase him out of the gulf. They were successful, and in May 1821, Lafitte left. However, he continued pirating merchant ships and was captured off the coast of Cuba. Lafitte was eventually released but was killed in a subsequent battle with Spanish ships. In November 1822, the USS Alligator fought a large band of pirates off the coast of Cuba. The American sailors recovered three ships that had been seized, but the pirates escaped.

Thousands of miles away, in the Aegean Sea, Greek pirates were plundering American merchant ships. In 1825, President James Monroe sent several ships under Commodore John Rodgers to lead convoys of ships safely through the troubled waters. They had no engagements until October 1827, when the USS Warren captured a sixteen-gun brig along with over a dozen pirates. A couple of weeks later the USS Porpoise saw a British ship attacked by 250 pirates in five ships. The Americans gave chase, and a boarding party headed for the captured ship, the Comet.

[Photo: Operation Ocean Shield was implemented to combat maritime piracy]
In the ensuing battle over 80 pirates were killed, including eleven single-handedly by a steward. The pirate leader was killed, and the British ship was saved without a single American casualty. In early 1828, the pirates' home base was attacked by French and British ships, and by the end of the year piracy was no longer a threat in the Aegean Sea.

Chinese pirates off Hong Kong had been harassing Western ships for many years. In 1855, the U.S. and British navies sent warships to rescue several merchant vessels taken by pirates. The Battle of Ty-ho Bay ended in success, as seven merchant ships were liberated during the battle. For the next 150 years, pirates, privateers and buccaneers posed little threat to American merchant ships.

Piracy off the coast of North Africa has been well-documented. Starting around the year 2000, international merchant ships have been attacked in and around the Gulf of Aden. Operation Enduring Freedom – Horn of Africa's Operation Ocean Shield – was NATO's response to piracy in the Indian Ocean, Guardafui Channel, Gulf of Aden, and Arabian Sea. The U.S. Navy was a major participant, and in April 2009 initiated the rescue effort on the hijacked Maersk Alabama. This was the first time in close to two centuries that an American-flagged merchant vessel had been pirated. Navy SEALs operating from the USS Bainbridge (named after the same officer who countered pirates some 200 years earlier) successfully killed three Somali pirates and rescued the captain of the Maersk Alabama. In at least four other incidents, Somali pirates tried to attack U.S. Navy ships and, needless to say, were beaten quite badly in each case. Piracy still exists today, but attacks have decreased since the early 1800s. Modern ships, technology, and updated operation tactics have made it extremely difficult for pirates to operate and receive ransom money. The U.S.
military has been closely monitoring pirates since the navy's inception, and in many respects its anti-piracy operations helped the U.S. Navy gain world recognition.

Avi Heiligman is a weekly contributor to The Jewish Home. He welcomes your comments and suggestions for future columns and can be reached at aviheiligman@gmail.com.

Tribe Tech Review
Muting Your iPhone When Entering Shul
By Dov Pavel

In the last article in this series, we discussed the two issues we all face regarding tefillah and smartphones. To set this up, you will need your shul (or home) address added as a contact. You will also need to have iOS 12 or greater installed. After installing the Shortcuts app, download the Shortcut "Mute iPhone for Tefillah" from my blog at TribeTechReview.com. Then open the Shortcut from your Library by clicking on the three dots "…" to edit the Shortcut. You will see that the Shortcut turns on the Do Not Disturb mode until you leave. With this screen open (that's important!)
speak to Siri and say these magic words: "Hey Siri, when I arrive at shul…" … to remind people to stop talking in shul.

Dov Pavel is a tech enthusiast who reviews personal technology and home automation through the lens of a shomer Shabbos consumer. He is not affiliated with any of the companies whose products he reviews and the opinions he expresses are solely his own. Dov is not a halachic authority and readers should consult their own rabbi as needed. Dov lives in Teaneck with his wife and three children. Previous articles can be found at TribeTechReview.com. Follow @TribeTechReview on Facebook, Twitter and LinkedIn.
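For readers curious what a location-triggered automation like this is doing under the hood, here is a rough sketch of the arrival/departure logic described above, written in Python rather than in the Shortcuts app. The function name and the simple phone model are hypothetical illustrations, not Apple's API; the real Shortcut runs inside iOS and toggles the actual Do Not Disturb setting.

```python
# Hypothetical sketch of a geofence-style automation: silence the phone on
# arrival at shul, and lift Do Not Disturb again on departure. This is an
# illustration of the logic only, not code that runs on an iPhone.

def on_location_change(current_location, shul_location, phone):
    """Called whenever the device's location changes."""
    if current_location == shul_location:
        phone["do_not_disturb"] = True   # arrived: mute notifications
    else:
        phone["do_not_disturb"] = False  # left: restore normal alerts

phone = {"do_not_disturb": False}

on_location_change("shul", "shul", phone)
print(phone["do_not_disturb"])  # True while at shul

on_location_change("home", "shul", phone)
print(phone["do_not_disturb"])  # False after leaving
```

The point of wiring this to a location trigger, as the Shortcut does, is that the mute and unmute both happen automatically; there is nothing to remember on the way in or out.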
Good Humor
Charmingly Chopped
By Jon Kranz

Many Jews love chopped liver. In fact, chopped liver is so popular that they also make a widely-available vegetarian version for the less carnivorous among us. That is rather astounding when you consider that few other meat dishes are routinely and widely sold in a vegetarian version. You do not often see vegetarian brisket, vegetarian roast beef or vegetarian helzel (stuffed chicken neck skin).

Chopped liver, in its meat form, normally consists of liver from a cow or chicken that has been broiled or sautéed with schmaltz (rendered chicken or goose fat). It is then mixed together with other ingredients – usually onions, eggs, salt and pepper – so that the consumer cannot actually tell that he or she is eating liver. Then again, it's sort of impossible to avoid that fact given the dish's overt, in-your-face and completely unapologetic name: chopped liver.

A compelling argument could be made that more people would eat chopped liver if it had a more appealing and appetizing name, or at least a softer and subtler one. For example, instead of calling it chopped liver, we could refer to it as "awesome organ," "excellent entrails," "incomparable innards" or "great guts." Of course, an equally compelling argument could be made that a name that does not clearly refer to "liver" would be misleading and possibly false advertising, especially for an innards initiate or a guts greenhorn. Even if that were the case, however, there arguably are far better liver-based names than chopped liver. For example, the average consumer might prefer monikers such as "loveable liver," "luscious liver," or "legendary liver" – you get my point.

Some purists might argue that it is equally important to convey that the liver has been chopped. Even if it is necessary to do so, isn't there a less harsh way to describe the preparation?
Couldn't we call it something softer-sounding like "minced liver," "diced liver" or "refined liver"? Aren't all of these options better than the blatant and flagrant "chopped liver"?

For the record, chopped liver is not the only form of liver consumed around the world. Many liver-eaters also enjoy liver steaks and onions, sautéed chicken livers, and liverwurst. No, the latter does not refer to the worst type of liver imaginable. Liverwurst refers to liver that comes in sausage form, often available either as a hard or soft delicacy. In fact, one could argue that when it comes to liver presentation, liverwurst is the best.

In the interest of completeness, we also must mention foie gras, a fancy French dish that involves the liver of a specially fattened goose or duck. The strange thing is that foie gras and chopped liver have an awful lot in common, yet only one of them is considered a luxury item. If foie gras is for the prince, then chopped liver is for the peasant. If foie gras is for the refined, chopped liver is for the rough. If foie gras is for the nouveau riche, chopped liver is for the nudnik.

Regardless of the form, liver must undergo a special process before it is deemed kosher. The liver must hail from a kosher animal that has been properly shechted (slaughtered), the gall bladder and all fats must be removed, and all blood must be extracted. Thus, producing a kosher piece of liver is not so easy to de"liver." As for the blood extraction, the typical soaking and salting method will not suffice because the liver contains higher levels of blood. As a result, it must be broiled to ensure that all blood is effectively cooked out. This must be done to every s"liver" of liver.

I know what you're thinking.
How did chopped liver find its way into one of the most confrontational and insecure retorts of all time, i.e., "What am I, chopped liver?" Some say that the expression is based on the fact that liver is not always the most sought-after and valued item. Others explain that chopped liver often is served as an appetizer or side dish and never as the main attraction. For these reasons and possibly others, chopped liver developed a rather pronounced insecurity, growing more self-conscious and self-loathing with every serving.

Of course, there are many other things in life that sometimes feel slighted, and thus a similar "What am I,…?" expression could apply to them too. For example:

When it comes to the High Holidays: "What am I, Chol Hamoed?"
When it comes to synagogue: "What am I, a past president?"
When it comes to Passover: "What am I, Pesach Sheini?"
When it comes to baked goods: "What am I, dietetic kichel?"

Final thought: What do you call a person who delivers chopped liver in his car? A chopped livery driver.

Jon Kranz is an attorney living in Englewood, New Jersey. Send any comments, questions or insults to jkranz285@gmail.com.

Your Money
It's All Chinese to Me
By Allan Rolnick, CPA
Who’s to say the Chinese aren’t onto something? Then there are “behavioral” taxes that include a vehicle and vessel use tax, a license-plate tax, a slaughter tax, and a banquet tax. That’s food for thought for you as you look through your own taxes and prepare for April 15. Classifieds classifieds@fivetownsjewishhome.com • text 443-929-4003 SERVICES SERVICES SERVICES Yoga & Licensed Massage Therapy Peaceful Presence Studio 436 Central Avenue, Cedarhurst Separate men/women Group/private sessions Gift Cards Available www.Peacefulpresence.com 516-371-3715 LEAH’S BEAUTY CONCEPTS Leah Sperber, Specializing in Laser Hair Removal, & Electrolysis. Using the latest Innovative technology, for best results, & painless treatments. Makeup for all Simchas, & Facial treatments. Call for appointment 917-771-7329 WOOD REVAMPING WE REVAMP CABINETS, DOORS, STAIRCASES AND FURNITURE. Give your house a modern face-lift without detecting it in your pocket. Commercial/Residential/Shuls Phone: (212)-991-8548 Email: woodrevamping@gmail.com Avi Dubin Licensed Real Estate Salesperson C: (516) 343-6891 | O: (516) 997-9000 adubin413@gmail.com woodmere cedarhurst price reduced 4bdrm, 3full updted bthrms, lr/dr w/ deck, EIK, SS appl, lg playrm, new roof, gas furnace, hot wtr tnk, CAC, Anderson wind, low taxes, walk to 12+ shuls, 3 skylites, 2 car grge, 2500ft2, SD#15. Edward Ave $699K Exp 5 BR ranch, 60x100 lot. Grnte/wd EIK, SS appl. Hdwd flrs, natural light. Lg closets, walk in attic, built-in shelving in hall, attached grg, deck. 2 br upstairs divisible, well kept bckyd. New win, new bthrm, walk to shuls. Location!! $699K woodmere hewlett Under Contract! sold 4 br spl, 2 fl bth, EIK, oasis in bkyd, CAC, skylights, new boiler, hot water tnk, fl fin bsmt, new o/s W/D, pool, ing sprinklers. Porch, deck, updtd elctric panel, SD#15, low taxes, ABC blocks, walking distance to many houses of worship.
$759K A 4BR- 3 full TH multi-level split on a quiet cul-de-sac with a circular driveway and a Gunite pool. This house features CAC, gas heating, plenty of living space, Dr, LR, MBR suite with full bath and Whirlpool. $699K woodmere woodmere Under Contract! Exp ranch on an oversized prop. 5br 3 full bath, EIK, formal DR, large den and fullsized basement with a huge recreation room, CAC and Gas Heat. Living space is 2400 sqft. 6 zone sprinklers, fireplace, huge 2 car attached garage $699K Under Contract! 4br s/h col, country feel, lg prop w/ low taxes. 2.5bths, hdwd flrs, new roof, siding, win, new boilr, ht wtr tnk. Gas heating, CAC, new bth/br, lg fin bsmnt, fenced yd, ing sprinklrs, fpl, 2car grge, patio, skylites, frnt prch, alrms, location! $839K HOUSES FOR SALE CAN’T AFFORD YOUR PROPERTY TAXES? MORTGAGE? Must sell for any reason? Call for FREE Consultation. Call now 212-470-3856 Cash buyers available! HOUSES FOR SALE BAYSWATER 4 Bedrooms, 3 bathrooms, Kosher Kitchen, DR/LR, Closets, Porch Call 516-206-2005 for more info GoingRealty@gmail.com PRICE REDUCED: Sprawling 4BR, 4BA Exp-Ranch, Oversized Rooms, LR W/Fplc, Formal Dining Rm, Large Den, Master Suite, Full Finished Basement, Storage Room & Office, Deck, Fabulous Property…$1.078M Call Carol Braunstein (516) 295-3000 WOODMERE $479,999 BRAND NEW LISTING. Best price/Best value in town. Renovated, SS/granite EIK, with adjacent family room/den, enc front porch, welcoming entry foyer, ample FLR/FDR. Four bedrooms on upper level and full, high, dry basement. Walk All location, including LIRR/GG/Lib. C Slansky, Broker 516-655-3636 NORTH WOODMERE Beautiful spacious 4 bedroom colonial, finished basement, in ground pool, close to all. 
$879,000 Call 516-924-2971 132 FEBRUARY 7, 2019 | The Jewish Home Classifieds HOUSES FOR SALE COMMERCIAL RE COMMERCIAL RE APT FOR RENT EAST ROCKAWAY: Retail Stores on Busy Corner, 1000SF& Up Available, Great High Visibility Location, For Lease… Call for More Details Broker (516) 792-6698 WOODMERE: BEST BUY Spacious 2BR Apartment, Washer/Dryer In Bldg, Elevator Bldg, Open Floor Plan, 1st Floor, Close To All...$199K Call Carol Braunstein (516) 295-3000 FOR RENT BY OWNER NO BROKERAGE INVOLVED. Beautiful, spacious 3 bedroom 2 bathroom, 2nd floor apt. for rent. Newly renevated, brand new stainless steel appliances, washer-dryer hookup. Located in Far Rockaway near many shuls/yeshivas. Near LIRR. For all inquiries, please call (718)-327-7889. COMMERCIAL RE 5 TOWNS: LOOKING FOR: Restaurateurs & Professionals!!! Orthoptists, Podiatrists, Chiropractors, Physical Therapists, Dentists, or Obstetrician/Gynecologists. Spaces Available in Cedarhurst, Hewlett, Lynbrook, Rockville Centre, Valley Stream area. For Lease... Call for More Details Broker (516) 792-6698 ROCKVILLE CENTRE Light Warehousing/Flex office space 8150 S/F - Built in Offices with Large Windows - 11' Ceiling clearance Indoor Loading Dock. Ideal Location / Walk To LIRR & Bus - Bank, Shopping, City Center. 917-822-0499 CEDARHURST 500-3,500 +/- SF Beautiful, newly renovated space for rent. Ideal for Retail or Executive offices. Prime location. Convenient Parking. Sam @516-612-2433 or 718-747-8080 INWOOD 10,000 sq ft brick building. Offices and warehouse. High ceilings. Asking $16/foot. Owner: 516-206-1100 mark@mbequitygroup.com ROCKVILLE CENTRE Flex Office Space / Light Warehousing 3650 S/F - Ready for move in. Competitively priced Ideal Location / Walk To LIRR & bus Bank, Shopping, City Center. 917-822-0499 INWOOD OFFICE SPACE LOWEST PRICES IN TOWN! 500-7000 Square feet gorgeous office space with WATERVIEW in Inwood! Lots of options. Tons of parking. Will divide and customize space for your needs! 
Call 516-567-0100 SF MEDICAL OFFICE SPACE Available, Reception Area, Waiting Room, Kitchenette, 2 Consult, 4 Exam Rooms, 2 Bathrooms, 30 Car On-Site Parking, For Lease … Call Ian for More Details (516) 295-3000 APT FOR RENT BAYSWATER FOR RENT 3 bedrooms, 2 bathrooms, Kosher kitchen, DR/LR, Closets, driveway, Close to all GoingRealty@gmail.com APT FOR RENT: FAR ROCKAWAY 2 BEDROOM APT IN PRIVATE HOUSE 2nd Floor /New Kitchen 2 SS sinks, New Bath, Washer/Dryer New Floors, Newly Painted call 347-753-1199 FAR ROCKAWAY BASEMENT ROOM FOR RENT IDEAL FOR DORM OR OFFICE 718-327-8007 Classifieds HELP WANTED HELP WANTED DRIVER FOR QUEENS DRY CLEANER ROUTE. Options to drive Tuesday am/ Thursday pm. Also hours available Monday am , Tue am and pm, Wed am and pm and Friday pm. Must have own car. Use of company van part time. Competitive salary. Contact Marc for info 917-612-2300 LOWER MANHATTAN ORTHODOX NONPROFIT SEEKS ACCOUNTING DIRECTOR Some public accounting experience a preferred, 5-7 years of experience, nonprofit experience a plus. Email resumes to renee@ou.org TEACH NYS/ORTHODOX UNION seeking LI Engagement Associate. Responsibilities: cultivating relationships, political activity, development, and event management across the Jewish Communities on LI on issues effecting Day School affordability through political advocacy. Contact watmana@ou.org. YESHIVA DARCHEI TORAH MIDDLE SCHOOL is hiring secular studies teachers for the Fall semester in all secular subjects; excellent working environment and salary; Monday-Thursday, 2:30-5:30 PM. Interviews being held now. Candidates should have prior teaching experience. Please send resume to mhorowitz@darchei.org Hebrew Academy of Long Beach, Woodmere, NY is seeking the following Maternity Leave positions: Rebbe or Morah for grades 6-8 Tanach and Halacha (PT). Resumes to: ulubetski@halb.org Seeking full time PHYSICAL THERAPIST for Special Education school located in Brooklyn. Experienced preferred. Competitive salary. Room for growth. 
resumes@yadyisroelschool.org SHOMER SHABBOS WOODMERE OFFICE LOOKING FOR A MATURE FULL-TIME SECRETARY. Computer knowledge (Word Perfect, Excel, QuickBooks, etc...) and communication skills a must Please email resume to info@ UHCofNY.org OFFICE MANAGER Do you have good organizational skills? Office Manager position available at local school. Responsibilities: work with vendors, coordinate staff schedules, manage schedules, etc. Must have good computer and communication skills. Great pay and work environment. Email resume to manager5towns@gmail.com ASSISTANTS NEEDED FOR ELEMENTARY SCHOOL, AFTERNOON SESSION. Email: fivetownseducators@gmail.com F/T & P/T REGISTERED NURSE openings to work with adults who have developmental disabilities within residential settings in Brooklyn, Manhattan, or Long Island. Current NYS RN, min 2 years hospital experience. OHEL: 855-OHEL JOB BAIS YAAKOV IN FAR ROCKAWAY SEEKING FIFTH GRADE LIMUDEI KODESH TEACHER to start immediately. Please email resume to teachingpositions1@gmail.com. Due to continued growth, the Yeshiva of South Shore is seeking Elementary School Teachers. Cert/Exp required. Please forward resume to monika@yoss.org Seeking full time OCCUPATIONAL THERAPIST for Special Education school located in Brooklyn. Experienced preferred. Competitive salary. Room for growth. 
resumes@yadyisroelschool DenaFriedmanGraphics 165 NORTH VILLAGE AVENUE ROCKVILLE CENTRE, NY ▷ Daily Mincha Minyan (Maariv) ▷ 10 minutes from Five Towns and West Hempstead ▷ Unlimited parking ▷ Five minute walk to LIRR station (30 minute direct train to and from Penn Station) ▷ Labcorp, Quest, and Sunrise medical labs in building ▷ Pharmacy/Convenience Store in building ▷ Centrally located between Sunrise Highway, Southern State Parkway and Peninsula Boulevard ▷ Multiple suite sizes available; build to suit EXECUTIVE SUITES COMING SOON - RESERVE YOUR SPACE NOW For more information contact: 212.686.5681 x 4201 sharon@rhodesny.com Life Coach Confusing Messages By Rivki D. Rosenwald Esq., MFT, CLC We often wonder why we are not getting through to our kids. Are we just not speaking their language or are we sending mixed signals? Well, I’ve been wondering if we are sending them confusing messages. People these days often want understanding from their kids. They state, “It’s difficult for us because we are part of the sandwich generation.” Then they go out and relish eating tons of salads and fruits and rarely a sandwich. How’s a kid to understand their parents are talking about being pulled by the generation above and below? After all, correct me if I’m wrong, but isn’t a sandwich a food? Perhaps we need to try to communicate somewhat better at what we are trying to communicate. Parents often say to their kids, “You should know better than to behave like that.” Yet, then they proceed to lose it with their kid. So where exactly should they know to behave better from? We want to know where our kids are every minute. We even track their locations. And then we wonder why our kids aren’t more independent?! We tell our kids, “Stop with your phones already,” though it’s usually while we’re asking to borrow a charger or whether anyone saw where we put our phone. Can you see how they might see something off about that picture? We assert, “Speak to me more respectfully!” Are we role modeling that to them? We demand, “Don’t text and drive!” Are WE? Do we resent when our children communicate with unthought-out reactive behaviors and comments and then calmly demonstrate not being reactive? Or do we overreact to their overreaction? Are we doing what we’re asking? Are we sending unclear signals? Are we part of the solution or the problem? Think: do I want to be a successful sandwich bond? Do I want to connect with my bread the right way? Then I better be certain I’m taking the time to role model what I’m asking of that part of the sandwich. That way, the sandwich we are all part of will be oh-so tasty and delicious. Rivki Rosenwald is a certified relationship counselor, and career and life coach. She can be contacted at 917-705-2004 or rivki@rosenwalds.com. Five Towns Jewish Home - 2-7-19
https://issuu.com/jewishhome/docs/currentissue_c286a87c9ee85b?e=0
CC-MAIN-2019-09
refinedweb
51,810
60.55
MyString.h

#pragma once
#include <string>

class MyString
{
private:
    char* buffer;

public:
    MyString(const char* initialInput)
    {
        if (initialInput != NULL)
        {
            buffer = new char[strlen(initialInput) + 1];
            buffer = (char*)initialInput;
        }
        else
        {
            buffer = NULL;
        }
    }

    ~MyString()
    {
        if (buffer != NULL)
            delete[] buffer;
    }

    const char* getString() { return buffer; }

    int getLength() { return strlen(buffer); }
};

Main.cpp

#include <iostream>
#include "MyString.h"

int main()
{
    MyString text("Nom nom nom!");
    std::cout << "The text stored in text reads: " << text.getString() << std::endl;
    std::cout << "The length of the text stored in text is: " << text.getLength() << std::endl;
    return 0;
}

When I run this code and the destructor is called, I get a debugger error message. I don't understand it; I tested the destructor and checked what the value in the buffer is, and it shows the right value, so it's obviously not NULL. Why can I not delete a pointer which isn't even pointing to null?
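For reference, here is one way the class could be repaired (a sketch, not the original poster's code). The bug is in the constructor: it allocates a buffer, then immediately overwrites the pointer with the address of the caller's string literal. The destructor therefore calls delete[] on memory that was never obtained from new[], which is undefined behavior, and the allocated buffer leaks. Copying the bytes instead fixes both problems:

```cpp
#include <cstring>
#include <cstddef>

class MyString {
    char* buffer;
public:
    explicit MyString(const char* initialInput) : buffer(nullptr) {
        if (initialInput != nullptr) {
            buffer = new char[std::strlen(initialInput) + 1];
            std::strcpy(buffer, initialInput);  // copy the bytes; don't alias the literal
        }
    }
    ~MyString() { delete[] buffer; }  // delete[] on a null pointer is a safe no-op
    const char* getString() const { return buffer; }
    std::size_t getLength() const { return buffer ? std::strlen(buffer) : 0; }
};
```

A complete class would also need a copy constructor and copy assignment operator (the rule of three), since the compiler-generated copies would otherwise share one buffer and double-delete it.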
http://www.dreamincode.net/forums/topic/322747-cant-delete-pointer-which-isnt-even-pointing-to-null/
CC-MAIN-2017-47
refinedweb
145
65.62
This is the mail archive of the libc-alpha@sources.redhat.com mailing list for the glibc project. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Carlos O'Donell wrote: > This is a cleaner HPPA linuxthreads implementation that stems from the > work that John David Anglin <dave.anglin@nrc-cnrc.gc.ca> and myself did > to devise a self-aligning lock system that doesn't impose the 16-byte > lock alignment restriction. The indentation and general style is wrong in many places. > diff -urN glibc-2.3.1.orig/linuxthreads/sysdeps/hppa/pspinlock.c glibc-2.3.1/linuxthreads/sysdeps/hppa/pspinlock.c > --- glibc-2.3.1.orig/linuxthreads/sysdeps/hppa/pspinlock.c 2002-08-26 18:39:51.000000000 -0400 > +++ glibc-2.3.1/linuxthreads/sysdeps/hppa/pspinlock.c 2003-01-15 18:26:51.000000000 -0500 > @@ -24,15 +24,12 @@ > int > __pthread_spin_lock (pthread_spinlock_t *lock) > { > - unsigned int val; > + unsigned int *addr = __ldcw_align (lock); > + > + while (__ldcw (addr) == 0) > + while (*addr == 0) ; This is plain wrong. addr at least must be volatile. And I don't understand why you removed the asm code. These pieces of code are prime candidates for hand-coding. > +static inline struct _pthread_descr_struct * __get_cr27(void) > +{ > + long cr27; > + asm("mfctl %%cr27, %0" : "=r" (cr27) : ); > + return (struct _pthread_descr_struct *) cr27; > +} Not a real problem, but you should get gcc to recognize this reqister and perform the loading. - -- - --------------. ,-. 444 Castro Street Ulrich Drepper \ ,-----------------' \ Mountain View, CA 94041 USA Red Hat `--' drepper at redhat.com `--------------------------- -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.1 (GNU/Linux) iD8DBQE+y74l2ijCOnn/RHQRAvZ1AKCkAhua5qA19EDylfBz2Zhp8dROIACfTrwO 9Slpj32aUy92PiggDNexZBE= =6Ilf -----END PGP SIGNATURE-----
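The point about volatile can be illustrated without HPPA assembly. The sketch below is my own portable C11 illustration, not the patch's ldcw code (note ldcw's inverted convention, where 0 means locked): the inner wait loop must read through an lvalue the compiler is not allowed to cache, otherwise the load can be hoisted out of the loop and the thread spins forever on a stale value. Here _Atomic provides that guarantee:

```c
#include <stdatomic.h>

/* Sketch only: a lock value of 1 means free, 0 means held,
 * mirroring the ldcw convention discussed in the thread. */
typedef struct { atomic_uint val; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    unsigned expected = 1;
    /* try to move 1 -> 0; on failure, wait until the lock looks free */
    while (!atomic_compare_exchange_weak(&l->val, &expected, 0u)) {
        expected = 1;
        while (atomic_load(&l->val) == 0)
            ; /* the atomic load cannot be hoisted out of this loop */
    }
}

static void spin_unlock(spinlock_t *l) {
    atomic_store(&l->val, 1);
}
```

With a plain non-volatile `unsigned *addr`, `while (*addr == 0);` may legally be compiled into a single load followed by an infinite loop, which is exactly the objection raised above.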
http://sourceware.org/ml/libc-alpha/2003-05/msg00198.html
CC-MAIN-2020-05
refinedweb
259
51.65
Hi, I have written a class library (DLL) which is used in a site that I am creating. The same DLL will be used in many of our sites and is universal for all of them, so I was thinking of centralizing it. I think putting the DLL in the GAC would be a good solution, so that all the other sites can use it. But how do I do this? I tried doing the .snk stuff, but I was not able to manage it. So, how do I change my DLL code so that it can be added to the GAC? Also, I would like to know how to access the same DLL from the GAC afterwards. Will it work the same way as when using the physical DLL earlier? I mean, will all the namespaces be available the same as before? Apart from the GAC, if there is any other solution please let me know. Thanks CodeNameVirus
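For what it's worth, the usual workflow looks roughly like this. The tool names (sn.exe, csc.exe, gacutil.exe) are real .NET SDK tools, but the file names are illustrative and exact switches vary by SDK version, so treat this as a sketch rather than a recipe:

```shell
# 1. Create a strong-name key pair (sn.exe ships with the .NET SDK)
sn -k MyKeyPair.snk

# 2. Sign the assembly when building it, either with
#    [assembly: AssemblyKeyFile("MyKeyPair.snk")] in AssemblyInfo.cs
#    or via the compiler switch:
csc /target:library /keyfile:MyKeyPair.snk MyLibrary.cs

# 3. Install the signed assembly into the GAC (requires admin rights)
gacutil /i MyLibrary.dll

# 4. Confirm it is registered
gacutil /l MyLibrary
```

Consuming it works the same as before: the namespaces are unchanged, you still add a reference to the assembly, and at load time the runtime resolves the strongly named assembly from the GAC instead of a local copy.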
http://www.antionline.com/showthread.php?277707-Creating-DLL-for-adding-in-GAC&p=943762&mode=linear
CC-MAIN-2016-36
refinedweb
160
91.82
NFC Data Exchange Format¶ NDEF (NFC Data Exchange Format) is a binary message format to exchange application-defined payloads between NFC Forum Devices or to store payloads on an NFC Forum Tag. A payload is described by a type, a length and an optional identifer encoded in an NDEF record structure. An NDEF message is a sequence of NDEF records with a begin marker in the first and an end marker in the last record. NDEF decoding and encoding is provided by the nfc.ndef module. >>> import nfc.ndef Parsing NDEF¶ An nfc.ndef.Message class can be initialized with an NDEF message octet string to parse that data into the sequence of NDEF records framed by the begin and end marker of the first and last record. Each NDEF record is represented by an nfc.ndef.Record object accessible through indexing or iteration over the nfc.ndef.Message object. >>> import nfc.ndef >>> message = nfc.ndef.Message(b'\xD1\x01\x0ET\x02enHello World') >>> message nfc.ndef.Message([nfc.ndef.Record('urn:nfc:wkt:T', '', '\x02enHello World')]) >>> len(message) 1 >>> message[0] nfc.ndef.Record('urn:nfc:wkt:T', '', '\x02enHello World') >>> for record in message: >>> record.type, record.name, record.data >>> ('urn:nfc:wkt:T', '', '\x02enHello World') An NDEF record carries three parameters for describing its payload: the payload length, the payload type, and an optional payload identifier. The nfc.ndef.Record.data attribute provides access to the payload and the payload length is obtained by len(). The nfc.ndef.Record.name attribute holds the payload identifier and is an empty string if no identifer was present in the NDEF date. The nfc.ndef.Record.type identifies the type of the payload as a combination of the NDEF Type Name Format (TNF) field and the type name itself. Empty (TNF 0) An Empty record type (expressed as a zero-length string) indicates that there is no type or payload associated with this record. 
Encoding a record of this type will exclude the name (payload identifier) and data (payload) contents. This type can be used whenever an empty record is needed; for example, to terminate an NDEF message in cases where there is no payload defined by the user application. NFC Forum Well Known Type (TNF 1) An NFC Forum Well Known Type is a URN as defined by RFC 2141, with the namespace identifier (NID) “nfc”. The Namespace Specific String (NSS) of the NFC Well Known Type URN is prefixed with “wkt:”. When encoded in an NDEF message, the Well Known Type is written as a relative-URI construct (cf. RFC 3986), omitting the NID and the “wkt:” -prefix. For example, the type “urn:nfc:wkt:T” will be encoded as TNF 1, TYPE “T”. Media-type as defined in RFC 2046 (TNF 2) A media-type follows the media-type BNF construct defined by RFC 2046. Records that carry a payload with an existing, registered media type should use this record type. Note that the record type indicates the type of the payload; it does not refer to a MIME message that contains an entity of the given type. For example, the media type ‘image/jpeg’ indicates that the payload is an image in JPEG format using JFIF encoding as defined by RFC 2046. Absolute URI as defined in RFC 3986 (TNF 3) An absolute-URI follows the absolute-URI BNF construct defined by RFC 3986. This type can be used for message types that are defined by URIs. For example, records that carry a payload with an XML-based message type may use the XML namespace identifier of the root element as the record type, like a SOAP/1.1 message may be represented by the URI ‘’. NFC Forum External Type (TNF 4) An NFC Forum External Type is a URN as defined by RFC 2141, with the namespace identifier (NID) “nfc”. The Namespace Specific String (NSS) of the NFC Well Known Type URN is prefixed with “ext:”. When encoded in an NDEF message, the External Type is written as a relative-URI construct (cf. RFC 3986), omitting the NID and the “ext:” -prefix. 
For example, the type “urn:nfc:ext:nfcpy.org:T” will be encoded as TNF 4, TYPE “nfcpy.org:T”. Unknown (TNF 5) An Unknown record type (expressed by the string “unknown”) indicates that the type of the payload is unknown, similar to the “application/octet-stream” media type. Unchanged (TNF 6) An Unchanged record type (expressed by the string “unchanged”) is used in middle record chunks and the terminating record chunk used in chunked payloads. This type is not allowed in any other record. >>> import nfc.ndef >>> message = nfc.ndef.Message('\xD0\x00\x00') >>> nfc.ndef.Message('\xD0\x00\x00')[0].type '' >>> nfc.ndef.Message('\xD1\x01\x00T')[0].type 'urn:nfc:wkt:T' >>> nfc.ndef.Message('\xD2\x0A\x00text/plain')[0].type 'text/plain' >>> nfc.ndef.Message('\xD3\x16\x00')[0].type '' >>> nfc.ndef.Message('\xD4\x10\x00example.org:Text')[0].type 'urn:nfc:ext:example.org:Text' >>> nfc.ndef.Message('\xD5\x00\x00')[0].type 'unknown' >>> nfc.ndef.Message('\xD6\x00\x00')[0].type 'unchanged' The type and name of the first record, by convention, provide the processing context and identification not only for the first record but for the whole NDEF message. The nfc.ndef.Message.type and nfc.ndef.Message.name attributes map to the type and name attributes of the first record in the message. >>> message = nfc.ndef.Message(b'\xD1\x01\x0ET\x02enHello World') >>> message.type, message.name ('urn:nfc:wkt:T', '') If invalid or insufficient data is provided to the NDEF message parser, an nfc.ndef.FormatError or nfc.ndef.LengthError is raised. >>> try: nfc.ndef.Message('\xD0\x01\x00') ... except nfc.ndef.LengthError as e: print e ... insufficient data to parse >>> try: nfc.ndef.Message('\xD0\x01\x00T') ... except nfc.ndef.FormatError as e: print e ... 
ndef type name format 0 doesn't allow a type string

Creating NDEF

To build NDEF messages use the nfc.ndef.Record class to create records and instantiate an nfc.ndef.Message object with the records as arguments.

>>> import nfc.ndef
>>> record1 = nfc.ndef.Record("urn:nfc:wkt:T", "id1", "\x02enHello World!")
>>> record2 = nfc.ndef.Record("urn:nfc:wkt:T", "id2", "\x02deHallo Welt!")
>>> message = nfc.ndef.Message(record1, record2)

The nfc.ndef.Message class also accepts a list of records as a single argument and it is possible to nfc.ndef.Message.append() records or nfc.ndef.Message.extend() a message with a list of records.

>>> message = nfc.ndef.Message()
>>> message.append(record1)
>>> message.extend([record2, record3])

The serialized form of an nfc.ndef.Message object is produced with str().

>>> message = nfc.ndef.Message(record1, record2)
>>> str(message)
'\x99\x01\x0f\x03Tid1\x02enHello World!Y\x01\x0e\x03Tid2\x02deHallo Welt!'

Specific Records

Text Record

>>> import nfc.ndef
>>> record = nfc.ndef.TextRecord("Hello World!")
>>> print record.pretty()
text = Hello World!
language = en
encoding = UTF-8

Uri Record

>>> import nfc.ndef
>>> record = nfc.ndef.UriRecord("")
>>> print record.pretty()
uri = 

Smart Poster Record

>>> import nfc.ndef
>>> record = nfc.ndef.SmartPosterRecord(uri)
>>> record.title['en'] = "Python module for near field communication"
>>> record.title['de'] = "Python Modul für Nahfeldkommunikation"
>>> print record.pretty()
resource = 
title[de] = Python Modul für Nahfeldkommunikation
title[en] = Python module for near field communication
action = default
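The record layout described in the parsing section can also be decoded by hand. The following is my own minimal illustration (not part of nfcpy, and using Python 3 bytes literals) of how the header octet of a short record splits into flags and the TNF value, applied to the b'\xD1\x01\x0ET\x02enHello World' example above:

```python
def parse_short_record(octets):
    """Parse one NDEF record with SR=1 (1-byte payload length)."""
    header = octets[0]
    sr = bool(header & 0x10)   # short record flag
    il = bool(header & 0x08)   # is an ID length field present?
    tnf = header & 0x07        # type name format, 0..6
    assert sr, "this sketch handles short records only"
    type_length = octets[1]
    payload_length = octets[2]
    pos = 3
    id_length = 0
    if il:
        id_length = octets[pos]
        pos += 1
    rtype = octets[pos:pos + type_length].decode("ascii")
    pos += type_length
    name = octets[pos:pos + id_length].decode("ascii")
    pos += id_length
    payload = octets[pos:pos + payload_length]
    return tnf, rtype, name, payload

tnf, rtype, name, payload = parse_short_record(b"\xD1\x01\x0eT\x02enHello World")
# tnf == 1 (NFC Forum Well Known Type), rtype == "T",
# payload == b"\x02enHello World"
```

The header 0xD1 is 0b11010001: MB and ME set (single-record message), SR set, IL clear, TNF 1, which is exactly the Text record the library decoded above.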
https://nfcpy.readthedocs.io/en/latest/topics/ndef.html
CC-MAIN-2018-13
refinedweb
1,217
50.53
Are you sure? This action might not be possible to undo. Are you sure you want to continue? VERIFICATION OF BANKING TRANSACTIONS Submitted to PUNJAB TECHNICAL UNIVERSITY, JALANDHAR Submitted in the partial fulfillment of the Degree requirement towards the Submitted By: Anu bala MBA –IIIrd Sem. Roll No. 100112243231 Submitted To: SESSION (2010-2012) BHAI GURDAS INSTITUTE OF ENGINEERING & TECHNOLOGY SANGRUR TABLE OF CONTENTS • • • • • Declaration Certificate from the Organization Certificate of Supervisor (Guide) Acknowledgement Executive Summary Chapter-1 Introduction • 1. 2. 3. 4. 5. To the topic Overview of the Industry Profile of the Organization Need of the study Objectives of the study • Chapter-2 Research Methodology 1. 2. 3. 4. 5. 6. 7. Statement of the Problem Research Design Sampling Techniques used Selection of Sample Size Data Collection Statistical Tools Used Limitations of the Study Chapter-3 Data Analysis and interpretation Chapter-4 Conclusion and Suggestions o o o Annexure Questionnaire Bibliography DECLARATION I Anu bala a student of MBA, 2010-2012 batch, Bhai Gurdas institute of engineering and techonology, here by declare that the project on,” verification of banking transactions” is my original work and it has not previsously formed the basis for the award of any other degree, diploma, fellowship or other similar titles. It has been done under the guidance of A.N Prashar ( external guide) Anu Bala Anu bala BGIET sangur MBA3rd sem Date : . Jalandhar.CERTIFICATE This is to certify that the project report entitled verification of banking transactions” is a bonofide work carried out by miss Anu bala d/o SUB Jaimal Singh has been accomplished under guidance and supervision. All sources of information and help have been duly mentioned and acknowledged. This is an original work and has not been submitted by him anywhere else for the award of any degree/diploma. 
This project is being submitted by her in the partial fulfillment of the requirements for the award of the Master of Business Administration from Bhai gurdas institute of engineering and technology Punjab technical university. and to the staff of Punjab state power corporation for helping me in completing my project work and making it a great success. I would like to express my deep sense of gratitude to staff of BHAI GURDAS Institute of engineering and technology .A. I sincerely express my gratitude and lot of thanks to Mr. ANU BALA . I would thank all my friends. Sangrur who introduced me to the subject and under whose guidance I am able to complete my project.N Prashar. This report entitled “VERIFICATION OF BANKING TRANSACTIONS" is the outcome of my summer training at Punjab state power corporation limited(head office Patiala). Last but not least.ACKNOWLEDGEMENT I feel immense pleasure to give the credit of my project work not only to one individual as this work is integrated effort of all those who concerned with it. which made my project more appealing and attractive. I want to owe my thanks to all those individuals who guided me to move on the track. faculty members and all respondents who rendered their precious time for contributing their skills and to fill the questionnaire. EXECUTIVE SUMMARY . C H A P T E R .I INTRODUCTION . understanding. or transfer of cash or property that occurs between two or more parties and establishes a legal obligation. exchange. correctness. 2. . Comparison of two or more items. contract. or truth of the information. Also called booking or reservation.INTRODUCTION verification Definitions 1. and then lending out this money in order to earn a profit. banking Definition In general terms. Transaction Definitions 1. Alternative term for acknowledgment. General: Agreement. or the use of supplementary tests. to ensure the accuracy. the business activity of accepting and safeguarding money owned by other individuals and entities. 
Accounting: Event that effects a change in the asset. liability. (2) transfer of title which may or may not be accompanied by a transfer of possession. Banking: Activity affecting a bank account and performed by the account holder or at his or her request. Computing: Event or process (such as an input message) initiated or invoked by a user or computer program. . 5. regarded as a single unit of work and requiring a record to be generated for processing in a database. VERIFICATION OF BANKING TRANSACTION Verification of banking transaction is a process of verifying the amounts. Transactions are recorded first in journal and then posted to a ledger. 3. checks from the ledger of bank with remittance sheet which provides the accuracy. Every transaction has three components: (1) transfer of good/service and money. or net worth account. and (3) transfer of exchange rights. Verification is a alternative term for acknowledgement. 4.2. Commerce: Exchange of goods or services between a buyer and a seller. In a secure transaction (see ACID qualities) such events are regarded as a single unit of work and must either be processed in their totality or rejected as a failed transaction. correctness and truth of the information. BANK TRANSACTION The Bank Transactions report contains information about all transactions. you can use this report to research the information that affected those accounts. or if you need assistance in the monthly reconciliation process. including deposits. grouped by bank accounts and by date over a selected time period. CASH CONTROLS CASH Coins. currency. If you have questions about your bank account. and money on hand or on deposit at a bank or a similar depository. money orders. • . transfers. checks. INTERNAL CONTROL OVER CASH IS IMPERATIVE To safeguard cash and assure the accuracy of the accounting records. checks. and other bank activity. INTERNAL CONTROL OVER CASH AND RECEIPTS . 
.INTERNAL CONTROL OVER CASH AND DISBURSEMENTS USE OF A BANK Bank minimizes the amount of the currency that must be kept in hand Contributes significantly to good internal control over cash. RECONCILING THE BANK ACCOUNT Reconciliation: Is necessary as the balance per bank and balance per books are seldom in agreement due to time lags and errors. BANK STATEMENTS A bank statement shows: checks paid and other debits charged against the account Deposits and other credits made to the account Account balance after each day’s transactions The bank statement is a copy of the bank’s records sent to the customer for review. one by the company. A “double” record of cash is maintained. one by the bank. A bank Reconciliation: Should be prepared by an employee who has no other responsibilities pertaining to cash. RECONCILING THE BANK ACCOUNT Steps in preparing a bank reconciliation: Determine deposits in transit Determine outstanding checks Note any error discovered Trace bank memoranda to the records . These two accounts are reconciled. For example. The purpose of the preparing a Bank Reconciliation Statement is: (a) To ensure that all transactions that effect your bank account have been properly recorded in your accounting system. BANKING RECONCILIATION Bank Reconciliation Banks usually send customers a monthly statement that shows the account's beginning balance (the previous statement's ending balance). the cash payments journal). Bank Reconciliation Statement One of the most important tasks in the monthly Accounting cycle is to prepare a Bank Reconciliation Statement.e. (b) To check that your Bank has not made mistakes. and the account's ending balance. it would be possible that you may write out a cheque but forget to record it in your accounting system (i. all transactions that affect the account's balance during the month. Generally if the mistake is in your favour. you say nothing but if the mistake takes money . 
away from you, then you complain to the bank! A failure to record transactions affecting your bank account would result in you not knowing how much money you actually had available. In one famous case, a bank accidentally deposited $50 million in a private bank account; the bank discovered its error after one week and took the money back, but it failed to recover the $58,000 in interest the account owner had earned. Wouldn't it be nice!

Essentially, the Bank Reconciliation Statement tests the difference between the bank balance on your bank statement and the bank balance in your accounting system; there will almost always be a difference. Each reconciling item used in determining the "adjusted cash balance per books" should be recorded by the depositor.

What does a Bank Reconciliation Statement look like? The illustration below shows a bank reconciliation statement as having two sections. Section A is where you calculate your bank balance at the end of the month from your own financial records. The computation is simple: add the receipts for the month to the bank balance at the beginning of the month, then take away the payments. This gives you the bank balance at the end of the month. Section B is where you list the differences between the transactions appearing on your bank statement and the transactions appearing on your cash journals.

How do you get started? First work out Section A. Then work out Section B: the task here is to compare the transactions in your cash payments journal and cash receipts journal with the transactions that appear on the bank statement. Neatly tick every transaction that appears in both places (the bank statement and the cash journals); any differences you find go in Section B. What is left should explain the difference between what you say is in your bank account and what the bank says.

Project monitoring
Monitoring is a necessary core management instrument and is critical to all projects, both for projects that might be encountering problems and for projects which are enjoying particular success. Monitoring is to be perceived as a positive and constructive activity supporting the project and helping it to realise its objectives. The benefits of monitoring extend beyond a given project, since lessons can be learnt and principles of best practice disseminated. Each project should have embedded internal monitoring arrangements to check progress and the achievement of milestones, identify problems, recognise the need for change, amendment or development, and ensure quality. The general objective of all monitoring activity is to maximise the impact of the programme and, as importantly, to maximise the return on investment of EU funds through the achievement of public policy objectives, the sustainability of the activities and programmes set up, and the multiplier effect.

With the start of the Tempus IV phase, a new monitoring approach has been introduced which links strong desk monitoring activities with a reinforced field monitoring policy. Tempus IV field monitoring policy is based on monitoring by EACEA Tempus unit staff and the National Tempus Offices, complemented by external monitoring experts upon request by the parent DG. Field monitoring concentrates more on the control of the activities, and an exercise of field monitoring can already take place shortly after a project starts and issue recommendations. Both desk and field monitoring have three functions or aspects, the first of which is Preventive: information on the rules and procedures, priorities, methodology, review of objectives and the activities planned.
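Returning to the bank reconciliation procedure described above, the Section A computation (opening balance plus receipts minus payments), the "ticking" of items that appear in both the cash journals and the bank statement, and the final agreement check can be sketched as a short script. All transaction names and figures below are invented for illustration only; they are not taken from PSPCL or any bank's records:

```python
from collections import Counter

# --- Section A: closing balance per the books (hypothetical figures) ---
opening_balance = 4_000.00
receipts = [1_200.00, 850.00, 300.00]   # cash receipts journal
payments = [500.00, 75.00, 1_000.00]    # cash payments journal
balance_per_books = opening_balance + sum(receipts) - sum(payments)

# --- Section B: "tick" the transactions that appear both in the cash
# journals and on the bank statement; the unticked items explain the
# difference between the two balances ---
journal_items = Counter([("dep 12/5", 1_200.00), ("dep 19/5", 850.00),
                         ("dep 28/5", 300.00), ("chq 101", -500.00),
                         ("chq 102", -75.00), ("chq 103", -1_000.00)])
statement_items = Counter([("dep 12/5", 1_200.00), ("dep 19/5", 850.00),
                           ("chq 101", -500.00), ("chq 102", -75.00)])

in_journals_only = journal_items - statement_items    # deposits in transit, outstanding cheques
on_statement_only = statement_items - journal_items   # e.g. bank charges not yet booked

# --- The adjusted bank balance must agree with the balance per books ---
balance_per_bank = 5_475.00          # closing balance on the bank statement
deposit_in_transit = 300.00          # "dep 28/5" has not reached the statement yet
outstanding_cheque = 1_000.00        # "chq 103" has not been paid by the bank yet
adjusted_bank = balance_per_bank + deposit_in_transit - outstanding_cheque

print(f"Balance per books:     {balance_per_books:,.2f}")
print(f"Adjusted bank balance: {adjusted_bank:,.2f}")
print("In journals only:", sorted(in_journals_only))
print("On statement only:", sorted(on_statement_only))
assert round(balance_per_books, 2) == round(adjusted_bank, 2)
```

Here the two unticked journal items (one deposit in transit and one outstanding cheque) are exactly the Section B entries that explain the difference between the book balance and the statement balance; once they are allowed for, the two adjusted figures agree.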
Monitoring is also Advisory (recommendations on both the content and the financial aspects) and exercises Control (check and assessment of the results). There are three specific stages in project monitoring:

Step 1: Following the launch of the project, the period during which the organization and the management of the consortium are set up. An exercise of field monitoring can already take place at this stage and issue recommendations, the results of which will appear in the interim report (period for field monitoring: 3 to 12 months after the beginning of the period of eligibility).

Step 2: Following the interim report, when the various activities of the project are deployed. Field monitoring checks the links between the contents of the interim report and the situation on the ground, and makes recommendations for the continuation of the project (period for field monitoring: after receipt of the interim report, in the last 6 to 12 months of the period of eligibility).

Step 3: After the completion of the project, field monitoring aims to evaluate the results of the project, its impact, its consequences, the sustainability of the activities, the multiplier effect, and the financial management relating to the use of the grant (period for field monitoring: 3 to 15 months after the end of the period of eligibility).

It is the responsibility of the Agency to follow the project cycle and to monitor closely the implementation of the selected proposals. In the framework of their terms of reference, NTOs will mainly focus on effectiveness, sustainability and efficiency. Field monitoring is taken to be a major task amongst the NTO activities: NTOs are required to undertake field monitoring of all projects involving HE institutions from their country (based on the monitoring plan approved by the EACEA Tempus unit), to report to the Agency and to propose recommendations. EC Delegations will be invited and their involvement is encouraged. When the EACEA Tempus unit itself undertakes field monitoring, it can either join a monitoring mission scheduled by an NTO or visit a specific project on which the local NTO is asked to accompany it; in this case the NTO will also participate actively in the field monitoring exercise. Field monitoring reports prepared by NTOs facilitate the desk monitoring operations of EACEA staff and are complementary to EACEA's follow-up of grant holder and partner activities.

Importance of Bank Documents and Their Verification
The law of evidence plays a pivotal role in the effective functioning of the judicial system. "The existence of substantive rights can only be established by relevant and admissible evidence." Relevancy of facts is the key to determining the outcome of the judicial process, which rests on a fair trial without fear or favour and upholds the principles of natural justice and human rights. Hence the importance of evidence can neither be overlooked nor ignored, and documents and the verification of documents are essential to establish the truth and to deliver justice. Evidence means and includes: (a) all statements which the court permits or requires to be made before it by witnesses, in relation to matters of fact under inquiry (such statements are called oral evidence); and (b) all documents, including electronic records, produced for the inspection of the court (such documents are called documentary evidence). Oral evidence need not constitute documentary evidence, but when the minutes of meetings and discussions are recorded, they become documentary proof. The proof of evidence comes out of documents, and the establishment of relevant evidence is through the verification of documents.

What are bank documents? It is generally believed that only the documents signed by a person to avail a facility are bank documents. Anybody who nurses such a belief is wrong, particularly a borrower.

1. From the very first day a potential customer approaches a bank to establish a relationship through an interaction, the potential document and the proof thereof begins. Banks do not entertain any proposal without a preliminary discussion, yet when such a discussion takes place, neither the minutes of the meeting are recorded nor is a gist of the meeting made, either by the bank or by the prospective borrower; thus an opportunity to create a document is lost for the prospective borrower. If any meetings and discussions take place with regard to the proposed project, the minutes should be recorded in detail and duly acknowledged by the borrower and the bank officials. The bank customer is entitled to receive certified copies of these records, and they constitute the first documents.

2. If the discussion bears fruit, the bank requests the prospective borrower to submit an application for bank facilities containing all relevant information, along with a detailed project report of the proposed venture. The bank may also ask the borrower to open an account and start banking transactions, so that the bank can monitor the borrower's activities from the very beginning. One of the most important aspects of opening the account is compliance with the KYC norms, which is mandatory.

3. After the receipt of the application for bank facilities along with the project
report, the bank undertakes a detailed technical feasibility and economic viability study. During the course of the study the bank may raise many queries, and sometimes call for further discussion of the project proposal, to which the borrower will have to give convincing and satisfactory replies in writing. All of these form important documents.

4. If the bank is satisfied with the replies of the borrower, it sanctions the facilities after discussion and intimates the borrower in writing of the sanction with all relevant terms and conditions. The bank conveys its sanction in duplicate, with all the terms and conditions and related papers, and gets them acknowledged by the borrower as having accepted the terms and conditions. In case the bank rejects the proposal, it has to give valid reasons for rejecting it, and if the borrower is not satisfied with those reasons, the borrower can make a representation to the bank to reconsider its decision. If the borrower is a company incorporated under company law, a resolution is passed by the borrower company recording acceptance of the terms and conditions as per the bank's requirements.

5. After the sanction of the facility, the borrower executes bank documents with regard to the facilities sanctioned, so as to enable the bank to create security documents legally binding the bank and the borrower; the bank thereby becomes a secured creditor. If the borrower belongs to the corporate sector, it will have to create a charge on the secured assets in favour of the bank within 30 days from the date of the documents; subsequently there can be modification, verification and satisfaction of the charges created. The entire process is carried out through various documents, and all the documents, papers and letters exchanged between the bank and the borrower become very important documents.

6. After the borrower fulfils and complies with all the terms and conditions stipulated in the sanction letter, the bank releases the funds, to be utilized for the purpose for which they were released.

7. The moment the bank releases the funds, credit monitoring by the bank begins. Once the loan is sanctioned, the borrower submits periodic returns and statements to the bank. They are: (a) Quarterly Information System (QIS) Forms I and II; (b) the half-yearly operating funds flow statement (Form III); (c) the annual review; (d) the monthly statement of select operational data (MSOD);
(e) control at the instance of the bank; and (f) the monthly stock statement and yearly stock audit. The supervision, control and monitoring of credit may be divided into the following categories: (I) (a) legal control, (b) physical control and (c) financial control; (II) off-site and on-site inspection and supervision; (III) off-site supervision by banks; and (IV) returns and statements submitted to the bank at the instance of the Reserve Bank of India. Since bank credit is purpose oriented, post-sanction supervision, control and monitoring of credit are very important. If credit monitoring is done judiciously, diligently, honestly and with focus on the purpose for which the credit was granted, it can prevent the account from becoming an NPA and the unit from becoming sick; it also helps the bank to prevent any slippage by which the account becomes an NPA. All the above items constitute yet other important documents, the more so because they contain the bank's observations.

8. Besides, any correspondence exchanged between the borrower and the bank becomes documentary evidence. Communication is thus an important tool which will aid the borrower in establishing his credibility.

9. The importance of the verification of documents rests on the fact that the contents of documents may be proved either by primary or by secondary evidence. The principle is: "the best evidence in the possession or power of the party must be produced. What the best evidence is must depend upon circumstances. Generally speaking, the original document is the best evidence. This is the general and ordinary rule; the contents can only be proved by the writing itself." Further: "The contents of every written paper are, according to the ordinary and well-established rules of evidence, to be proved by the paper itself, and by that alone, if the paper is in existence." This tenet of law clearly establishes the need for the verification of documents. Besides, the existence of any onerous clause, any distortion of the terms and conditions laid down in the sanction letter, any breach of contract, and any lacuna in the document can be established only by verification of the documents.

10. In the ultimate analysis, communication is the most vital aspect of any dealings, particularly in legal matters. It can make a life or mar a life.

GENERAL OBJECTIVES OF THE SUBJECT
Establish a rigorous and complete analysis of the different financial products and services currently offered by banking entities (banks, savings banks and credit cooperatives) in their operations. The initial framework encompasses all financial operations and services offered by banking institutions, studied from the standpoint of the customer, whether a company or an individual, demanding the transaction or service in question. All transactions will be valued in money terms, and also according to the cost or effective yield of the transaction, which will be referred to as the effective return of the transaction.

PRACTICALS IN THE COMPUTER CLASSROOM
A computer will be used to quantify and simulate the main banking transactions (current accounts, credits, loans, bank discounts, etc.), with a dual objective: firstly, to determine the credits and debits generated by each transaction; and secondly, to obtain the cost or effective rate of return of each transaction.

METHODOLOGY AND WORK PLAN
In order to achieve the abovementioned objectives, in addition to the normal theoretical-practical classes, a work programme will be developed based on the quantification of bank transactions by computer.

ASSESSMENT SYSTEM.
Students attending the courses on a regular basis (examinations in February and July) will be assessed on the basis of continuous work and attendance at computer classes (1 point), the resolution of one practical banking-transaction case study on computer (1.5 points) and the completion of a theoretical-practical examination (7.5 points). In July and September the subject will be assessed solely on the basis of the theoretical-practical examination.

PROGRAMME

TOPIC 1.- BANKING SYSTEM AND BANK INSTITUTION.
OBJECTIVES: An explanation will be offered of the functioning of the Spanish financial system and, within this system, the role played by the banking system.
1.1- Concept, function and elements of the financial system. 1.2- Current structure of the Spanish financial system. 1.3- Credit and banking system. 1.4- Banking institutions and banking business. 1.5- Organization of banking activity. 1.6- Banking transactions.

TOPIC 2.- ASPECTS THAT DEFINE BANKING FINANCIAL OPERATIONS.
OBJECTIVES: Review the basic concepts that define financial operations and which we will apply in the study of the different banking operations.
2.1- Value of money in time and interest rate. 2.2- Methods for calculating interest. 2.3- Nominal rate and effective interest rate (ERI). 2.4- Cost or effective rate of return of banking transactions. Equivalent annual rate (EAR). 2.5- Valuation dates. 2.6- Effective rate of return (ER). 2.7- Description, analysis and valuation of income.

TOPIC 3.- PASSIVE BANKING OPERATIONS.
OBJECTIVES: Study the main passive banking transactions in which banks obtain income, with special emphasis on the calculation of the profitability obtained by customers.
3.1- Introduction. 3.2- Current accounts: concept, description and operation. 3.3- Savings accounts. 3.4- Fixed-term deposits. 3.5- Deposit certificates. 3.6- Cash and treasury bonds.

TOPIC 4.- ACTIVE BANKING OPERATIONS (I).
OBJECTIVES: Study the different active banking operations involving loans, calculating the effective cost for customers.
4.1- Introduction. 4.2- Differences between credits and loans. 4.3- Credit accounts: description, repayment of interest and cost. 4.4- Applying for credit. 4.5- Types of loans: index-linked, mortgage-based and currency-based.

TOPIC 5.- ACTIVE BANKING OPERATIONS (II).
OBJECTIVES: Study the active banking operations involving discount, calculating the effective cost for customers.
5.1- Introduction. 5.2- Banking discount: description, repayment of interest and cost. 5.3- Exchange bills and promissory notes. 5.4- Transfer of credit.

TOPIC 6.- INTERNATIONAL BANKING TRANSACTIONS.
OBJECTIVES: Analyse the different international banking transactions, with special emphasis on the study of documentary credit.
6.1- Introduction. 6.2- Sale-purchase of currencies, exchange risk and exchange-rate hedging. 6.3- International payment methods: cheques, transfers, documentary credit and confirmed payment. 6.4- Financing of imports. 6.5- Financing of exports. 6.6- Guarantees.

TOPIC 7.- BANKING SERVICES.
OBJECTIVES: The different existing and new banking services are analysed in detail, among them: cards; cash services; cash transactions and payment and settlement orders; transfers; direct debiting; safes; the night drop box; commercial reports; e-banking; the management of the collection of trade bills; and transactions on marketable securities on the account of issuing entities.

TOPIC 8.- CONTROL OF BANK ACCOUNTS.
OBJECTIVES: Students will analyse the activities of banking entities and the facilities available to customers who wish to present claims if they are not happy with the performance or service offered.
8.1- General criteria of performance of banking entities. 8.2- Credit investments and risk analysis. 8.3- Evaluation process and transfer of risks. 8.4- Claims Services of the Bank of Spain.

TOPIC 9.- OTHER FINANCIAL FIGURES AND ENTITIES.
OBJECTIVES: Students will study other activities developed by financial entities and which have been incorporated into banking entities, among them: investment funds; pension plans, savings and retirement plans; life insurance; financial leasing; renting; and factoring.

OVERVIEW OF THE INDUSTRY

COMPANY OVERVIEW: Punjab State Electricity Board (PSEB) is a government organization engaged in power generation, transmission and distribution. The company is a statutory body formed in 1959 under the Electricity (Supply) Act, 1948; in its present form, it came into existence with effect from 1st May 1967. The company produces and supplies electricity to the customers of the state of Punjab. PSEB owns and operates power stations based on various technologies, including thermal and hydro, with a total installed capacity of 6,366 MW. PSEB also constructs and maintains its transmission and distribution system for supplying electricity to the various categories of consumers in the state.

HISTORY OF PUNJAB STATE ELECTRICITY BOARD

STATEMENT SHOWING OBJECTS AND REASONS FOR PLACING THE ANNUAL FINANCIAL STATEMENT FOR THE YEAR 2008-09 BEFORE THE LEGISLATIVE ASSEMBLY
Punjab State Electricity Board was constituted by the Government of Punjab (I&P Deptt.) with effect from the first day of May 1967, under section 5 of the Electricity (Supply) Act, 1948 (hereinafter called the Act), vide its notification no. 5279-I&EL(7) 189/67/9563 dated 29.4.1967. The Board is charged with the following general duties under section 18 of the Act: I. To arrange the supply of electricity within the state, and for the transmission and distribution of the same, in the most efficient and economical manner. In addition, according to section 61 of the Act, the Board is required to submit to the state Government of Punjab, in February each year, a statement of estimated capital and revenue receipts and expenditure for the ensuing year, and the latter shall cause it to be laid on the
table of the house of the state legislature for discussion. The Board's other general duties under section 18 are: II. To prepare and carry out schemes for establishing generating stations and for the transmission and distribution of power within the state, and to operate them. III. To collect data on the demand for, and use of, electricity, and to formulate perspective plans for the generation, distribution and utilization of electricity within the state as per the provisions of the Act. IV. To exercise control in relation to the generation, transmission and supply of electricity within the state.

The Electricity Act, 2003 has since been enacted, but the said Act contains no guideline for the preparation of the annual financial statement and its laying on the table of the state legislature. As such, the financial statement has been prepared in compliance with section 61 of the Electricity (Supply) Act, 1948. In order to fulfil this statutory requirement, the annual financial statement of the Board for the financial year 2008-09, as received from the Board, is placed before the state legislature.

SWOT ANALYSIS: PUNJAB STATE ELECTRICITY BOARD
STRENGTHS: Diversified customer base; vertically integrated operations.
WEAKNESSES: Delay in unbundling of PSEB; insufficient infrastructure.
OPPORTUNITIES: Agreements and contracts; increasing demand for electricity in Punjab.
THREATS: Extreme weather conditions; intense competition.

KEY COMPETITORS
The Tata Power Company Limited; Uttar Pradesh Electricity Board; Auro Energy Limited; Maharashtra State Electricity Board; Gujarat State Electricity Corporation Limited.

PROFILE OF THE ORGANISATION
PUNJAB STATE POWER LOCATION VIEW

NEED OF THE STUDY
The need for the study arises because a trainee must understand the company: its achievements, its tasks and the services it offers. Only after understanding and collecting information about the organization will a trainee be able to work well for it. Specifically, the need here is to learn about the overall banking transactions and their verification in the monitoring section of the PSPCL (Head Office). From the study I have learned a great deal about the company, as well as about the work titled "Verification of banking transactions", which will help me greatly in my future working days.

OBJECTIVES OF THE STUDY
The following are the main objectives undertaken in the present study:
• The major objective of the study is to ensure the accuracy of transactions.
• The study of bank reconciliation and monitoring provides an accurate record of the transactions.
• The major objective for a trainee is to learn more about the verification of banking transactions and their reconciliation.
• It gives a proper record of the banking transactions, which provides exact knowledge about them.
• This study provides a written record of the transactions of many years.
• It helps in removing defaults.

C H A P T E R - I I
RESEARCH METHODOLOGY

Introduction and Meaning
Research is a careful investigation or inquiry, especially through the search for new facts in a branch of knowledge. Market research specifies the information required to address these issues, designs the method for collecting information, manages and implements the data collection process, and analyses the results and communicates the findings and their implications.

Techniques
The problem definition can be said to be the most essential part of the research process, as it determines precisely what the managerial problem is and the type of information that the research can generate to help solve it. It is better to decide upon the method and technique of data collection before conducting the fieldwork. A research problem is one which requires a researcher to find the best solution to a given problem, that is, to find the course of action by which the objectives can be obtained optimally in the context of a given environment. Generally, there are two techniques of data collection:
1. The census technique
2. The sample technique (convenience sampling)
A census is a complete enumeration of each and every unit of a population, whereas in a sample only a part of the universe is studied and conclusions about the entire universe are drawn on that basis. The census method is costlier and more time consuming than the sampling method, but its results are more nearly representative. The availability of resources, the time factor, the degree of accuracy desired and the scope of the problem enable us to apply the sample technique.

Data Collection
The objectives of the project are such that both primary and secondary data are required to achieve them, so both primary and secondary data were used for the project.

RESEARCH DESIGN
Research design determines how data are collected for the research. There are two types of data:
1. Primary data: data collected afresh and for the first time, and thus original in character. Here, the primary data was collected to measure customer satisfaction and perception regarding ICICI Prudential Life Insurance. It was collected by means of a questionnaire, and the analysis was done on the basis of the responses received from the customers; the questionnaire was designed in such a manner that the consumer's satisfaction level can be measured and the consumer can enter his responses easily. Primary data was also collected through a personal meeting with a senior officer.
2. Secondary data: data which have already been collected by someone else and which have already been passed through the statistical process, drawn from sources such as magazines, newspapers, books, journals, the internet and websites. The purpose of collecting secondary data was to achieve the objective of studying the recent trends and developments taking place in life insurance.

The Method Section
The method section of an APA-format psychology paper provides the methods and procedures used in a research study or experiment; it should provide enough information to allow other researchers to replicate your experiment or study. You should provide detailed information on the research design, the participants, the equipment and materials, the variables, and the actions taken by the participants. The method section should use subheadings to divide up the different subsections. These subsections typically include: Participants, Materials, Design and Procedure.
1. Participants: Describe the participants in your experiment, including who they were, how many there were and how they were selected. For example: We randomly selected 100 children from elementary schools near the University of Arizona.
2. Materials: Describe the materials, measures, equipment or stimuli used in the experiment. This may include testing instruments, technical equipment, books, images or other materials used in the course of research. For example: Two stories from Sullivan et al.'s (1994) second-order false belief attribution tasks were used to assess children's understanding of second-order beliefs.
3. Design: Describe the type of design used in the experiment, and explain whether your experiment uses a within-groups or between-groups design. In the worked example used in this section, the examiner explained to each child that he or she would be told two short stories and that some questions would be asked after each
story. Specify the variables as well as the levels of these variables: for example, the experiment used a 3x2 between-subjects design, and the independent variables were age and understanding of second-order beliefs.
4. Procedure: The next part of your method section should detail the procedures used in your experiment. Explain what you had participants do, how you collected data, and the order in which the steps occurred. For example: An examiner interviewed children individually at their school in one session that lasted 20 minutes on average. All sessions were videotaped so the data could later be coded.

Methodology can be defined, as shown earlier in the diagram, as "the analysis of the principles of methods, rules, and postulates employed by a discipline". Research methodology refers to the search for knowledge; one can also define it as a scientific and systematic search for required information on a specific topic. The word "research", according to the Advanced Learner's Dictionary, means a careful investigation or inquiry, especially through the search for new facts in any branch of knowledge, and some authors define research methodology as a systematized effort to gain new knowledge.

Analysis and Interpretation
After the data collection, the data was compiled, classified and tabulated, manually and with the help of a computer. The task of drawing inferences was then accomplished with the help of the percentage and graphic methods. Different suggestions given by me to the Company after analyzing the views of every respondent are also given in the report.

What is a problem statement?
the search for knowledge through objective and systematic method of finding solution to a problem/answer to a question is research. and there seems to be continuing confusion as to what comprises a problem statement. Research is the systematic process of collecting and analyzing information (data) in order to increase our understanding of the phenomenon about which we are concerned or interested. Truth means the quality of being in agreement with reality or facts. Research is the pursuit of truth with the help of study. comparison and experimentation. store. or summary of the content of the report comprise a problem statement? To add to the confusion. To do research is to get nearer to truth. definitions and explanations of techniques used to collect. 2. research methods textbooks in the social sciences do not clarify the matter. In other words. It also means an established or verified fact. observation. "The systematic study of methods that are. . A documented process for management of projects that contains procedures. can be. analyze and present information as part of a research process in a given discipline. although they may note that research examines problems or that it engages in problem solving. we repeatedly find that problem statements are absent or incomplete. hypothesis. 3.1. The study or description of methods This definition explains that research involves acquisition of knowledge. Are purpose and problem statement synonymous? Does a study objective. as well as reading numerous studies published in other journals. Research means search for truth. Hernon and Metoyer (Hernon & Metoyer-Duran. 1. identification of what would be studied. The components of a problem statement More than 30 years ago. over the years. it is important to address the “so what” question and to demonstrate that the research is not trivial). and 9. They discovered nine attributes that respondents associated with problem statements (Hernon & Metoyer-Duran.Dr. 3. 
clarity and precision (a well-written statement does not make sweeping generalizations and irresponsible statements). pp. one of us while a doctoral student took a course on reflectiveinquiry taught by one of the foremost researchers in higher education at Indiana University. 8. and justification (regardless of the type of Research.More than a decade ago. No use of unnecessary jargon. Identification of key concepts and terms. 1993. Articulation of the study's boundaries or parameters 6. 7. 1993. Furthermore. that conceptualization has gained resounding support from other researchers. 1994) supplied sample problem statements to researchers in library andinformation science and other social science disciplines in an attempt to investigate different attitudes toward the composition of a problem statement. 4. Hisconceptualization of a problem statement actually guides the expectations we have for allpapers submitted for review in Library & Information Science Research. identification of an overarching question and key factors or variables. Some generalizability. Conveyance of more than the mere gathering of descriptive data providing a snapshot. 2. benefits. 5. Conveyance of the study's importance. Metoyer Duran & Hernon. 82–83): 1. David Clark stressed that any problem statement in the social sciences should contain four components: . while avoiding the use of valueladen words and terms. In teaching students to write good problem statements. explanation of study significance or the benefits to be derived from an investigation of the problem. There is definitely a conflict or problem. require. A mere question. that is. Dr. adding that a statement of purpose indicates what the study will accomplish but does not place that goal or task in the context of a problem. andthe lead-in helps set up the third component and attract a readership. with an identification of what the study would do. Clark viewed a purpose statement as part of the third component. 
Dr. Clark stressed that any problem statement in the social sciences should contain four components: (1) a lead-in; (2) a declaration of originality (e.g., mentioning a knowledge void, which would be supported by the literature review); (3) an indication of the central focus of the study; and (4) an explanation of study significance, or the benefits to be derived from an investigation of the problem.

A mere question, he noted, does not identify a problem; a problem involves a conflict or something unsettled, perplexing, vexing, distressful, and in need of investigation. For illustrative purposes, suppose that two people do not get along. Does the resolution of the problem require the conduct of research, or might there be other ways to resolve the conflict? If the intention is to support a research study, the problem statement must clearly indicate the former.

In teaching students to write good problem statements, he required them to develop three interlocking short sentences: (1) the lead-in, (2) a statement about originality, and (3) a justification. Dr. Clark viewed a purpose statement as part of the third component, adding that a statement of purpose indicates what the study will accomplish but does not place that goal or task in the context of a problem. Some researchers, however, prefer to substitute an overarching question or two for a purpose statement. Clark also reminded his students that a subsequent section of reflective inquiry covers "objectives, research questions, and hypotheses," and therefore these components are out of scope for the problem statement. Study significance, finally, must survive the "so what" question as well as the "how so" question.

Consider, for example, a problem statement for a study of public libraries. Does the lead-in raise questions (e.g., does information seeking behavior encompass information dissemination practices?) that a background section could address? Does the second sentence narrow and sufficiently clarify the intent of the proposed study? And, finally, the third sentence will have to result in a paragraph or two that addresses how the data to be gathered will be useful for public library service improvement and planning.

In a sense, research is like dealing with a set of propositions in a debate or an argument adhering to the principles of logic: there is a problem, so the study was conducted to accomplish the objective of addressing it. The purpose is to persuade or gain acceptance of the conclusion, and to do so it is essential for others to accept the first and all subsequent propositions. The problem statement is the first proposition, and we need to accept it before considering the next one. Once the problem statement is written, the remaining parts of the research study should flow from it.

OBJECTIVES OF THE RESEARCH

The main objectives of the research are:
• To know about the awareness of the bank transaction sector.
• To know the scope as an investment opportunity and to know the priority of people while selecting different Saving Schemes.
• To know about the future plans of people.

Separately, the main objective of this Expression of Interest (EOI) is to explore the technologies available in this field and to develop a suitable project/tender document, including detailed technical specifications, after studying the offers. Creation of a digital map is a step towards building a web-enabled GIS for the transmission network, giving physical identification of its assets. The state-of-the-art technology would also assist in taking up optimum route surveys of future lines and maintaining an accurate map of the network.

VISION & VALUES

Our vision: The vision of the head office is to provide accurate reports about bank transactions by verifying the MTs of the bank ledger in the monitoring section of the head office, PSPCL. This we hope to achieve by:
• Understanding the needs of customers and offering them superior services.
• Leveraging technology to service customers quickly, efficiently and conveniently.
• Developing and implementing superior risk management and investment strategies to offer sustainable and stable returns to our policyholders.
• And above all, building transparency in all our dealings.

CHAPTER - III

DATA ANALYSIS AND ITS INTERPRETATION
• Be able to recognize examples of different kinds of variables — continuous, discrete, count, categorical, nominal, ordinal, interval, ratio, dichotomous — and the advantages/disadvantages of treating them in different ways.
• Basic meaning of various terms used to characterize the mathematical attributes of different kinds of variables.
• What is meant by data coding and why it is carried out.
• What is meant by a "derived" variable and the different types of derived variables.
• Need to edit data before serious analysis and to catch errors as soon as possible.
• Options for data cleaning — range checks, consistency checks — and what these can (and cannot) accomplish.
• Concepts of Type I error, Type II error, significance level, confidence level, statistical "power", and 1-sided vs. 2-sided tests.
• What is a confidence interval and how it can be interpreted.

DATA ANALYSIS

Data collection is the systematic recording of information; data analysis involves working to uncover patterns and trends in datasets; data interpretation involves explaining those patterns and trends. Scientists interpret data based on their background knowledge and experience; thus different scientists can interpret the same data in different ways. By publishing their data and the techniques they used to analyze and interpret those data, scientists give the community the opportunity both to review the data and to use them in future research.

Topics such as the computation of p-values, asymptotic tests, Fisher's exact test, z-tables, Bayesian versus frequentist approaches, meta-analysis, and the interpretation of multiple significance tests are, as far as EPID 168 is concerned, purely for your edification and enjoyment, not for examinations. I encourage a nondogmatic approach to statistics (caveat: I am not a "licensed" statistician!).

Data analysis and interpretation

Epidemiologists often find data analysis the most enjoyable part of carrying out an epidemiologic study, since after all of the hard work and waiting they get the chance to find out the answers.
Also relevant are the relationships among these concepts and sample size, statistical precision, and intracluster correlation; the objectives of statistical hypothesis tests ("significance" tests); the meaning of the outcomes from such tests; confidence intervals; and how to interpret a p-value. In general, however, computation of power, confidence intervals, intracluster correlation, or sample size will not be asked for on exams.

If the data do not provide answers, that presents yet another opportunity for creativity! So analyzing the data and interpreting the results are the "reward" for the work of collecting the data. Data do not, though, "speak for themselves"; they reveal what the analyst can detect. So when the new investigator, attempting to collect this reward, finds him/herself alone with the dataset and no idea how to proceed, the feeling may be one more of anxiety than of eager anticipation.

As with most other aspects of a study, analysis and interpretation should relate to the study objectives and research questions. One often-helpful strategy is to begin by imagining or even outlining the manuscript(s) to be written from the data. The usual analysis approach is to begin with descriptive analyses, to explore and gain a "feel" for the data. The analyst then turns to address specific questions from the study aims or hypotheses, from findings and questions from studies reported in the literature, and from patterns suggested by the descriptive analyses. Before analysis begins in earnest, though, a considerable amount of preparatory work must usually be carried out.

Analysis - major objectives

1. Evaluate and enhance data quality.
2. Describe the study population and its relationship to some presumed source (account for all in-scope potential subjects; compare the available study population with the target population).
3. Estimate measures of frequency and extent (prevalence, incidence, means, medians).
4. Assess potential for bias (e.g., nonresponse, refusal, and attrition; comparison groups).
5. Estimate measures of strength of association or effect.
6. Assess the degree of uncertainty from random noise ("chance").
7. Control and examine effects of other relevant factors.
8. Seek further insight into the relationships observed or not observed.
9. Evaluate impact or importance.

Preparatory work - Data editing

In a well-executed study, the data collection plan, including procedures, instruments, and forms, is designed and pretested to maximize accuracy. All data collection activities are monitored to ensure adherence to the data collection protocol and to prompt actions to minimize and resolve missing and questionable data. Monitoring procedures are instituted at the outset and maintained throughout the study, since the faster irregularities can be detected, the greater the likelihood that they can be resolved in a satisfactory manner and the sooner preventive measures can be instituted. Nevertheless, there is often the need to "edit" data, both before and after they are computerized.

The first step is "manual" or "visual" editing. Before forms are keyed (unless the data are entered into the computer at the time of collection, e.g., through CATI - computer-assisted telephone interviewing), the forms are reviewed to spot irregularities and problems that escaped notice or correction during monitoring. Open-ended questions, written comments from the participant or data collector, and laboratory tests usually need to be coded; even forms with only closed-end questions having precoded response choices (i.e., numbers or letters corresponding to each response choice) may require coding for such situations as unclear or ambiguous responses, multiple responses to a single item, and other situations that arise. (Coding will be discussed in greater detail below.)
Visual editing also provides the opportunity to get a sense for how well the forms were filled out and how often certain types of problems have arisen. It is possible to detect data problems (e.g., inconsistent or out-of-range responses) at this stage, but these are often more systematically handled at or following the time of computerization.

Preparatory work - Data cleaning

Once the data are computerized and verified (key-verified by double-keying, or sight-verified), they are subjected to a series of computer checks to "clean" them. Range checks compare each data item to the set of usual and permissible values for that variable. Range checks are used to:
• Detect and correct invalid values
• Note and investigate unusual values
• Note outliers (even if correct, their presence may have a bearing on which statistical methods to use)
• Check reasonableness of distributions and also note their form, since that will also affect the choice of statistical procedures

Consistency checks examine each pair (occasionally more) of related data items in relation to the set of usual and permissible values for the variables as a pair. For example, males should not have had a hysterectomy; college students are generally at least 18 years of age (though exceptions can occur, so this consistency check is "soft", not "hard"). Consistency checks are used to:
• Detect and correct impermissible combinations
• Note and investigate unusual combinations
• Check consistency of denominators and "missing" and "not applicable" values (i.e., verify that skip patterns have been followed)
• Check reasonableness of joint distributions (e.g., in scatterplots)
In situations where there are a lot of inconsistent responses, the approach used to handle inconsistency can have a noticeable impact on estimates and can alter comparisons across groups. Authors should describe the decision rules used to deal with inconsistency and how the procedures affect the results (Bauer and Johnson, 2000).

Preparatory work - Data coding

Data coding means translating information into values suitable for computer entry and statistical analysis. All types of data (e.g., questionnaires, medical records, laboratory tests) must be coded, though in some cases the coding has been worked out in advance. The objective is to create variables from information: variables summarize and reduce data, attempting to represent the "essential" information, with an eye towards their analysis. The following questions underlie coding decisions:
1. What information exists?
2. What information is relevant?
3. How is it likely to be analyzed?
It is important to document how coding was done and how issues were resolved, so that consistency can be achieved and the inevitable questions ("How did we deal with that situation?") answered.

Types of variables - levels or scales of measurement

Constructs or factors being studied are represented by "variables". Variables (also sometimes called "factors") have "values" or "levels", and analytic techniques depend upon variable types. A continuous variable (sometimes called a "measurement variable") can be used in answer to the question "how much". Mathematically, a continuous variable takes on all values within its permissible range, so that for any two allowable values there are other allowable values in between. In practice, the instruments used to measure these and other phenomena, and the precision with which values are recorded, allow only a finite number of values. Measurements such as weight, height, and blood pressure can, of course, be represented by continuous variables and are frequently treated as such in statistical analysis.
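The range and consistency checks described above can be sketched in a few lines of code. This is only an illustrative outline, not the checking program of any particular study; the field names, permissible limits, and example records are all hypothetical.

```python
# Illustrative range and consistency checks; fields and limits are hypothetical.

def range_check(rec):
    """Compare one item to its set of usual and permissible values."""
    problems = []
    if not 0 <= rec["age"] <= 120:        # "hard" check: value is invalid
        problems.append("age out of range")
    elif rec["age"] < 18:                  # "soft" check: unusual but possible
        problems.append("unusually low age - investigate")
    return problems

def consistency_check(rec):
    """Examine a pair of related items as a pair."""
    problems = []
    if rec["sex"] == "M" and rec["hysterectomy"] == "yes":
        problems.append("male recorded as having had a hysterectomy")
    return problems

records = [
    {"id": 1, "age": 34,  "sex": "F", "hysterectomy": "no"},
    {"id": 2, "age": 210, "sex": "F", "hysterectomy": "yes"},  # invalid age
    {"id": 3, "age": 45,  "sex": "M", "hysterectomy": "yes"},  # impermissible pair
]
for rec in records:
    for flag in range_check(rec) + consistency_check(rec):
        print("record", rec["id"], "->", flag)
```

In keeping with the text's distinction, flagged values are investigated against the original forms rather than silently "corrected".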
In contrast, a discrete variable can take only certain values between its maximum and minimum values, even if there is no limit to the number of such values (e.g., the set of all rational numbers is countable though unlimited in number). If the values of a variable can be placed in order, then whether the analyst elects to treat it as discrete and/or continuous depends on the variable's distribution, the requirements of available analytic procedures, and the analyst's judgment about interpretability.

Types of discrete variables

1. Identification - a variable that simply names each observation (e.g., a study identification number) and which is not used in statistical analysis.
2. Nominal - a categorization or classification, with no inherent ordering (e.g., ethnicity, ABO blood group). Nominal variables can be dichotomous (two categories, e.g., gender) or polytomous (more than two categories). The values of the variable are completely arbitrary and could be replaced by any others without affecting the results.
3. Ordinal - a classification in which values can be ordered or ranked; examples are injury severity and socioeconomic status. Since the coded values need only reflect the ranking, they can be replaced by any others with the same relative ranking (e.g., 1, 2, 9 or 1.5, 2.22, 6.9 could all be used in place of 1, 2, 3).
4. Count - the number of entities, events, or some other countable phenomenon, for which the question "how many" is relevant (e.g., parity, number of siblings). Discrete variables that can take any of a large number of values are often treated as if they were continuous.
In epidemiologic data analysis, count variables are often treated as continuous, especially if the range is large.

Types of continuous variables

1. Interval - differences (intervals) between values are meaningful, but ratios of values are not. An example from physics is temperature measured on the Fahrenheit or Celsius scale; psychological scales (e.g., anxiety, depression) often have this level of measurement. The reason is that the zero point for the scale is arbitrary, so values have meaning only in relation to each other. For example, if the variable takes on the values 11-88, with a mean of 40, it is meaningful to state that subject A's score of 60 is "twice as far from the mean" as subject B's score of 50. But it is not meaningful to say that subject A's score is "1.5 times the mean". Without loss of information, the scale can be shifted: 11-88 could be translated into 0-77 by subtracting 11, and scale scores can also be multiplied by a constant. After either transformation, subject A's score is still twice as far from the mean as subject B's, but after the shift subject A's score is no longer 1.5 times the mean score.

2. Ratio - both differences and ratios are meaningful. There is a non-arbitrary zero point, so it is meaningful to characterize a value as "x" times the mean value. Kelvin (absolute) temperature is a ratio-scale measure, and physiological parameters such as blood pressure or cholesterol are ratio measures. Any transformation other than multiplying by a constant (e.g., a change of units) will distort the relationships of the values of a variable measured on the ratio scale.
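The interval-versus-ratio distinction can be verified numerically. The scores below are hypothetical; they simply reproduce the "mean of 40, subject A = 60, subject B = 50" illustration from the text.

```python
from statistics import mean

scores = [20, 30, 40, 50, 60]       # hypothetical interval-scale scores
m = mean(scores)                    # mean is 40

# Differences survive a shift (the interval property):
shifted = [s - 11 for s in scores]  # translate an 11-88 scale toward 0-77
assert 60 - m == (60 - 11) - mean(shifted)   # A is still 20 above the mean

# Ratios do NOT survive a shift:
print(60 / m)                        # 1.5 on the original scale
print((60 - 11) / mean(shifted))     # no longer 1.5

# Multiplying by a constant (a change of units) preserves ratios:
scaled = [s * 3 for s in scores]
print((60 * 3) / mean(scaled))       # still 1.5
```

This is why "1.5 times the mean" is meaningful only on a ratio scale, while "twice as far from the mean" is meaningful on both.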
Many variables of importance in epidemiology are dichotomous (i.e., nominal with two levels): case vs. noncase, exposed vs. unexposed.

Preparatory work - Data reduction

Data reduction seeks to reduce the number of variables for analysis by combining single variables into compound variables that better quantify the construct. Variables created during coding attempt to faithfully reflect the original data (e.g., height, weight), but it is also often necessary to create additional variables to represent constructs of interest; for example, the construct overweight is often represented by a variable derived from the values for height and weight. Often the coded variables can be used directly for analysis. Data reduction also includes simplifying individual variables (e.g., collapsing six possible values to a smaller number) and deriving compound variables (e.g., "socioeconomic status" derived from education and occupation). Variable values are often collapsed into a small number of categories for some analyses and used in their original form for others.

For an apparently ordinal or continuous variable, the phenomenon itself may not warrant treatment as such; rather, the underlying reality (or, rather, our conceptual model of it) determines the approach to quantification. It is necessary to ask such questions as: "Is 'more' really more?" and "Are thresholds or discontinuities involved?" In general:
• Simpler is better
• Avoid extraneous detail
• Categorize based on the nature of the phenomenon (e.g., a study of Down's syndrome can collapse all age categories below 30 years, whereas a study of pregnancy rates will require a finer breakdown below 30 years and even below 20 years)
• Take into account threshold effects, saturation phenomena, and other nonlinearities
• Create additional variables, rather than destroy the original ones (never overwrite the raw data!)
• Inspect detail before relying on summaries
• Verify accuracy of derived variables and recodes by examining cross-tabulations between the original and derived variables
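As a concrete sketch of a derived variable, the construct "overweight" mentioned above can be derived from height and weight via body mass index. The cut-points used here are the standard WHO categories, which the text itself does not specify, and the subjects are hypothetical; note how the derived values are stored alongside the raw data, never in place of them.

```python
def bmi(weight_kg, height_m):
    """Derived compound variable: body mass index from two raw variables."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Collapse the continuous derived variable into categories chosen
    from the nature of the phenomenon (standard WHO cut-points)."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

subjects = [{"weight": 70, "height": 1.75}, {"weight": 95, "height": 1.70}]
for s in subjects:                  # add derived variables, keep the originals
    s["bmi"] = bmi(s["weight"], s["height"])
    s["bmi_cat"] = bmi_category(s["bmi"])

# Cross-tabulating bmi_cat against the original height and weight values is
# exactly the kind of check recommended above for verifying derived variables.
```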
Types of derived variables

• Scales - In a pure scale (e.g., depression, anxiety, self-esteem), all of the items are intended as individual measures of the same construct. The purpose of deriving a scale score from multiple items is to obtain a more reliable measure of the construct than is possible from a single item. If the items did indeed measure the same construct in the same way and were indeed answered in an identical manner, then the only differences in their values should be due to random errors of measurement. The scale score is usually the sum of the response values for the items, though items with negative valence (e.g., "I feel happy" in a depression scale) must first be inverted. Scale reliability (internal consistency) is typically assessed by using Cronbach's coefficient alpha, which can be thought of as the average of all of the inter-item correlations.
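Cronbach's alpha can be computed directly from its usual formula, alpha = k/(k-1) x (1 - sum of item variances / variance of the summed score). The item responses below are hypothetical; this is a sketch, not output from any scale discussed in the text.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of responses per scale item (equal lengths).
    Returns k/(k-1) * (1 - sum of item variances / variance of summed score)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]   # scale score = sum of items
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

# Three items answered almost identically -> high internal consistency.
items = [
    [1, 2, 3, 4, 5],
    [2, 2, 3, 4, 4],
    [1, 3, 3, 4, 5],
]
print(round(cronbach_alpha(items), 2))   # 0.96
```

By the guideline below, a value this high would be adequate both for analyzing associations (0.80 or greater) and for clinical use (0.90 or greater).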
Cronbach's alpha gives the proportion of the total variation of the scale scores that is not attributable to random error. Values of 0.80 or greater are considered adequate for a scale that will be used to analyze associations; if the scale is used as a clinical instrument for individual patients, its alpha should be at least 0.90 (see Nunnally's textbook, Psychometrics). Analyses of relationships between individual items (inter-item correlation or agreement), between each item and the remaining items (item-remainder correlation), between each item and the total scale (item-scale correlation), and among groups of items (factor analysis) are standard methods of analyzing item performance. When the scale consists of separate subscales, internal consistency may be more relevant for the individual subscales than for the scale as a whole.

• Indexes - An index consists of a group of items that are combined (usually summed) to give a measure of a multidimensional construct. Here each of the items measures a different aspect or dimension, so internal consistency measures like Cronbach's alpha are either not relevant or require a different interpretation. Examples of indexes derived from several variables include socioeconomic status (e.g., income, education, occupation, neighborhood), social support (e.g., marital status, number of close friends, number of close family members), and sexual risk behavior (number of partners, types of partners, use of condoms, anal intercourse). Items may have different weights, depending upon their relative importance and the scale on which they were measured.

• Algorithms - A procedure that uses a set of criteria according to specific rules or considerations, e.g., major depressive disorder or "effective" contraception. (I have not seen this term used to designate a type of variable before, but I am not aware of any other term for this concept.)

Preparatory work - Exploring the data

Try to get a "feel" for the data: inspect the distribution of each variable, and examine bivariate scatterplots and cross-classifications. Do the patterns make sense? Are they believable?
• Observe shape - symmetry vs. skewness, discontinuities
• Select summary statistics appropriate to the distribution and variable type (nominal, ordinal, measurement): location - mean, median, percentage above a cut-point; dispersion - standard deviation, quantiles
• Look for relationships in the data
• Look within important subgroups
• Note the proportion of missing values

Preparatory work - Missing values

Missing data are a nuisance and can be a problem. For one, analyses that involve multiple variables (e.g., crosstabulations, regression models) generally exclude an entire observation if it is missing a value for any variable in the analysis (this method is called listwise deletion). Thus, an analysis involving 10 variables, even if each has only 5% missing values, could result in excluding as much as 50% of the dataset (if there is no overlap among the missing responses)! Moreover, missing responses mean that the denominators for many analyses differ, which can be confusing and tiresome to explain.
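Listwise deletion, as described above, is easy to demonstrate: a case is dropped whenever any analysis variable is missing. The records and variable names below are hypothetical (None marks a missing value).

```python
cases = [
    {"age": 34,   "bp": 120,  "chol": 190},
    {"age": None, "bp": 118,  "chol": 201},   # missing age only
    {"age": 51,   "bp": None, "chol": 240},   # missing bp only
    {"age": 47,   "bp": 131,  "chol": 185},
]
analysis_vars = ["age", "bp", "chol"]

# Listwise deletion: keep only cases complete on every analysis variable.
complete = [c for c in cases if all(c[v] is not None for v in analysis_vars)]
print(len(complete), "of", len(cases), "cases remain")   # 2 of 4

# Each variable is only 25% missing here, yet half the cases are lost -
# the same arithmetic behind the 10-variables-at-5% example above.
```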
Also, unless data are missing completely at random (MCAR - equivalent to a pattern of missing data that would result from deleting data values throughout the dataset without any pattern or predilection whatever), an analysis that makes no adjustment for the missing data will be biased, because certain subgroups will be underrepresented in the available data (a form of selection bias).

Descriptive analyses

Exploration of the data at some point becomes descriptive analysis: examining, and then reporting, measures of frequency (incidence, prevalence) and extent (means, survival time), association (differences and ratios), and impact (attributable fraction, preventive fraction). These measures will be computed for important subgroups and probably for the entire study population. Standardization or other adjustment procedures may be needed to take account of differences in age and other risk factor distributions, follow-up time, and other forms of bias.

Evaluation of hypotheses

After the descriptive analyses comes evaluation of the study hypotheses, if the study has identified any. Here there will be a more formal evaluation of potential confounding and other potential alternative explanations for what has been observed. One aspect of both descriptive analysis and hypothesis testing is the assessment of the likely influence of random variability ("chance") on the data. Much of the field of statistics has grown up to deal with this aspect, to which we will now turn.

Concept of hypothesis testing (tests of significance)
Since we have a decision between two alternatives (H0 and HA), we can make two kinds of errors:

Type I error: erroneously reject H0 (i.e., incorrectly conclude that the data are not consistent with the model).
Type II error: erroneously fail to reject H0 (i.e., incorrectly conclude that the data are consistent with the model).

(The originator of these terms must have been more prosaic than the originators of the terms "significance", "power", "precision", and "efficiency".) Traditionally, the Type I error probability has received more attention and is referred to as the "significance level" of the test.

Analytical Study of Inventory Management in Punjab State Electricity Board*

INVENTORIES ARE VIEWED by most of the business world as a large potential risk, and not as a measure of wealth as was prevalent in old days. There is a constant fear in the minds of businessmen because of uncertainty in market situations: inventory stocked in excess of demand may lead to drastic price cuts, while inventory stocked short of demand may put the business out of the market. The dilemma is whether to stock or not to stock, so that stock remains saleable before it becomes worthless because of obsolescence. With a rather tight monetary market, optimisation of resources through proper inventory control becomes one of the major challenges for the material managers in every organization.

A widening gulf between theory and practice has become a remarkable phenomenon in this age of science and technology: while the frontiers of knowledge are widening and theory is developing at a fast rate, practice is lagging far behind. This is probably true of all branches of knowledge and is specially true for the inventory management area.

Inventories play an essential and pervasive role in the power sector. The material function in the power industry has a distinct importance, as every power plant, along with the transmission and distribution system, is committed to provide consumers at their premises an uninterrupted supply of electrical power, adequately and as and when required, ensuring quality, reliability and economy of supply, with emphasis on overall economy. The entire power system is a one-line process, and failure of any vital component in the process results in partial or total outage of the industry.

Problems Studied
In the study, efforts have been made to conduct a detailed analysis of inventory management functions in the PSEB. For this purpose, detailed research has been conducted as follows:
1. The existing purchase system of the Board was observed, in which the organization structure of the procurement department, purchase policies, and records relating to purchases were analysed.
2. The detailed organization structure of the stores of the Controller of Stores department was studied: the inventories held in the stores, their receiving and issuing procedures, and the techniques of stores control adopted by the Board were analysed.
3. The treatment given to wastages, spoilage and dead inventory by the stores was examined.
4. The existing system of inventory control adopted by the Board was studied.
5. New inventory control techniques were applied to improve the efficiency of the material management department and to reduce the cost of inventory.

Research Methodology

The relevant data and information have been collected from primary as well as secondary sources. For collection of information, detailed questionnaires pertaining to raw-material management in general, and to purchase control as well as stores control specifically, were drafted and got filled up from all the senior personnel, and some superintendent-level employees, of the material management department of the P.S.E.B. Direct interviews were also conducted with the concerned departments. This helped to gather the actual prevailing conditions of purchasing, maintaining and controlling materials/components. Moreover, the Board's thermal power plant at Ropar was also visited many times to study the inventory management systems in the Ropar thermal power plant.

Major Findings of the Research

The study has been divided into 7 chapters. The first chapter is the 'Introduction', in which the overall position of electricity in India is mentioned and then the PSEB is introduced. The second chapter is titled 'Literary Survey of Inventory Management Techniques'. In this chapter, an attempt has been made to summarise and present the theories and important concepts of inventory management; after explaining the need and importance of inventory management techniques, some of the inventory management techniques are explained.
Finance India

The third chapter is 'Procurement System in PSEB'. Theoretical concepts of purchasing are discussed in the beginning of the chapter, and then the procurement system followed by the Board is explained. For the purpose of analyzing purchase activities, a detailed questionnaire of 35 questions relating to purchase activities was prepared and circulated among the employees of the Central Purchase Organization (CPO). The CPO has adopted a standardised purchase procedure, and the employees of the CPO follow these procedures: making comparative statements, preparing purchase orders and forwarding them to suppliers, fixation of delivery schedules, etc. However, there are many other activities which are to be performed in a purchase department - evaluation of the performance of purchase activities, purchase budgeting, buyer-seller relations (suppliers' goodwill), supplier performance and rating, purchase research and purchase ethics, training of the employees, and pursuing legal matters with suppliers in the court. These are not performed, or if performed, then not adequately, in the CPO.

The fourth chapter is 'Stores and Stores Control in PSEB'. In the beginning of the chapter, theoretical concepts of store keeping are discussed; then the store procedures followed in the stores of the Controller of Stores department are mentioned, such as receipts, issues, inter-store transfer, preparation of records, the reporting system adopted by the stores, security measures adopted in the stores, and maintenance of material. A seven-point scale was used for the purpose of evaluation of store activities. It is concluded that store employees stress mainly routine/ordinary types of activities in their respective stores, like receipt and issue of materials and preparing routine reports and records, including shortages, damages, etc. Efforts are also made to calculate the storage cost of inventory.
Whereas more activities are required for running stores efficiently - e.g., ABC analysis, cost reduction programmes, maintaining various levels of inventory, store-wise replenishment of stocks, measuring the efficiency of stores, efficient material handling, and training of the store employees - proper care has not been taken of these activities in the store department. Only store verification activities are done seriously, by store employees and by stock verifiers appointed by the Department of Material Service. There are many store control techniques which can be used for managing stores efficiently, so these techniques are suggested for better control in the stores of the COS department.

Major material handling activities are performed manually in the Controller of Stores department; mechanical handling is done only for bulky material. There are 597 work-charge/daily-wage workers for material handling in the stores, and the total wages paid to these employees are Rs. 1 crore approximately. Efforts should be made to handle the materials mechanically so that the cost of handling can be reduced.

The fifth chapter is framed for analyzing inventory management techniques in PSEB. Here efforts are made to apply some more inventory control techniques - inventory turnover ratio, economic order quantity, ABC analysis - for efficient management of purchase and store activities in the material management department. Some of these are discussed here.

Inventory Turnover Ratio: Inventory turnover ratios of 25 major items of the COS department were calculated for the years 1988-89, 1989-90 and 1990-91 respectively. By comparing the inventory turnover ratio of each item across the different years, it is concluded that the stock level of each item is not maintained properly in different years by the COS department. The carrying cost of the COS department comes to 34.6% approximately.

Abstract of Doctoral Dissertations
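The inventory turnover calculation can be sketched as follows. The dissertation abstract does not reproduce its exact formula or item-level figures, so the definition used here (value consumed divided by average inventory held) is one common convention, and all numbers below are illustrative assumptions.

```python
def inventory_turnover(consumption_value, opening_stock, closing_stock):
    """One common definition: value consumed / average inventory held."""
    average_inventory = (opening_stock + closing_stock) / 2
    return consumption_value / average_inventory

# Hypothetical figures (Rs. lakh) for one store item over the study years:
years = {
    "1988-89": (120, 35, 45),
    "1989-90": (150, 45, 30),
    "1990-91": (90,  30, 60),
}
for year, (consumed, opening, closing) in years.items():
    print(year, round(inventory_turnover(consumed, opening, closing), 2))
```

A ratio that swings widely from year to year, as in this sketch (3.0, 4.0, 2.0), is the kind of pattern the study reads as stock levels not being maintained properly.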
The sixth chapter is named as ‘Inventory Management in Thermal Power Plant’ in the study. Different problems faced by the RTP relating to coal have been highlighted. Store of RTP. procurement system used in the RTP was studied. material handling systems followed and store control techniques used in the stores were discussed. is coming Rs. Various records prepared by the RTP relating to these inputs has also been mentioned in the chapter. EOQ of 25 major items. it was analyzed that only few items are coming under A & B category.(EOQ) The CPO is procuring items worth Rs. As we know coal and oils are the main inputs of the Thermal power plants.53.6%. activities performed in stores.900.36. in which.B. After that. Order cost per order calculated is coming Rs. 1. Chapter seventh concludes our study.00. have been calculated. So strict policies can be developed and implemented for controlling A & B category of items. 5.000. In the study.94. 14 items out of 1374 are in A category.00 (approx). Then the organization structure of entire plant has been explained. The employees of the CPO are procuring these items on the basis of their past experience. This chapter integrates the finding and recommendations of the study . Ropar were visited and store activities were analyzed. 56 are in B category. There will be saving of more than Rs.4 ‘Crores’ in a year if items are purchased through EOQ system. 100 crores every year.00. different policies are developed for A. Whereas total inventory cost as per PSEB’s purchase procedure for 25 major items. Ropar. maintenance. Ropar Thermal Power Plant is covered under the study. Total inventory cost of 25 major items of COS is coming Rs.B & C category of items respectively so that the store items can be controlled properly. . . C H A P T E R .V I CONCLUSION AND SUGGESTIONS . BIBLIOGRAPHY . Delhi. “Marketing Management” (Prentice Hall India. 1999) • Kotler.com . Lesis Publishers. Boca Raton.. Biostatistics for epidemiologists.N. 
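The EOQ technique the study recommends is the standard Wilson lot-size formula. A small sketch follows, with made-up figures rather than the study's data; only the 34.6% carrying-cost rate is taken from the text above.

```python
from math import sqrt

def eoq(annual_demand, cost_per_order, unit_cost, carrying_rate):
    """Wilson EOQ: sqrt(2DS/H), where H = unit cost x carrying-cost rate."""
    carrying_cost_per_unit = unit_cost * carrying_rate
    return sqrt(2 * annual_demand * cost_per_order / carrying_cost_per_unit)

# Hypothetical item: 1200 units/year demand, Rs. 900 per order, Rs. 50 per
# unit, carrying cost 34.6% of unit value (the rate reported for COS).
q = eoq(1200, 900.0, 50.0, 0.346)
print(round(q), "units per order")
```

Ordering in lots of this size minimizes the sum of ordering and carrying costs, which is the source of the savings the study attributes to EOQ-based purchasing.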
Cedric Le Goater <clg@fr.ibm.com> writes:

> This patch adds the user namespace.
>
> Basically, it allows a process to unshare its user_struct table,
> resetting at the same time its own user_struct and all the associated
> accounting.
>
> For the moment, the root_user is added to the new user namespace when
> it is cloned. An alternative behavior would be to let the system
> allocate a new user_struct(0) in each new user namespace. However,
> these 0 users would not have the privileges of the root_user and it
> would be necessary to work on the process capabilities to give them
> some permissions.

It is completely the wrong thing for the root_user to span multiple
namespaces as you describe. It is important for uid 0 in other
namespaces not to have the privileges of the root_user. That is half
the point. Too many files in sysfs and proc don't require caps but
instead simply limit things to uid 0. Having a separate uid 0 in the
different namespaces instantly makes all of these files inaccessible,
and keeps processes from doing something bad.

To a filesystem that does not share a uid namespace with a process, the
only things that should be accessible are those that are
readable/writeable by everyone. Unless the filesystem has provisions
for storing multiple uid namespaces, no files should be able to be
created. Think NFS root squash.

> Signed-off-by: Cedric Le Goater <clg@fr.ibm.com>
> Cc: Andrew Morton <akpm@osdl.org>
> Cc: Kirill Korotaev <dev@openvz.org>
> Cc: Andrey Savochkin <saw@sw.ru>
> Cc: Eric W. Biederman <ebiederm@xmission.com>
> Cc: Herbert Poetzl <herbert@13thfloor.at>
> Cc: Sam Vilain <sam.vilain@catalyst.net.nz>
> Cc: Serge E. Hallyn <serue@us.ibm.com>
> Cc: Dave Hansen <haveblue@us.ibm.com>
>
> ---
>  fs/ioprio.c               |   5 +
>  include/linux/init_task.h |   2
>  include/linux/nsproxy.h   |   2
>  include/linux/sched.h     |   6 +-
>  include/linux/user.h      |  45 +++++++++++++++
>  init/Kconfig              |   8 ++
>  kernel/nsproxy.c          |  15 ++++-
>  kernel/sys.c              |   8 +-
>  kernel/user.c             | 135 ++++++++++++++++++++++++++++++++++++++++++----

This patch looks extremely incomplete. Every comparison of a user id
needs to compare the tuple (user namespace, user id) or it needs to
compare struct users. Every comparison of a group id needs to compare
the tuple (user namespace, group id) or it needs to compare struct
users. I think the key infrastructure needs to be looked at here as
well. There needs to be a user namespace association for mounted
filesystems. We need a discussion about how we handle mapping users
from one user namespace to another, because without some form of
mapping so many things become inaccessible that the system is almost
useless. I believe some of the key infrastructure, which is roughly
kerberos authentication tokens, could be used for this purpose.

A user namespace is a big thing. What I see here doesn't even seem to
scratch the surface.

Eric
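The tuple comparison being asked for can be illustrated outside the kernel. This is a toy Python sketch with made-up types, not kernel code: a uid only means something relative to its user namespace, so comparing bare numbers wrongly identifies a namespace's uid 0 with the real root.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserNs:
    name: str

@dataclass(frozen=True)
class Kuid:
    ns: UserNs   # which user namespace the uid lives in
    uid: int     # the numeric uid within that namespace

init_ns = UserNs("init")
child_ns = UserNs("child")

root = Kuid(init_ns, 0)        # the "real" root_user
fake_root = Kuid(child_ns, 0)  # uid 0 inside an unshared namespace

# Bare-uid comparison would grant the child namespace root's access:
assert root.uid == fake_root.uid
# Tuple comparison keeps them distinct, which is half the point:
assert root != fake_root
```

The same argument applies to gids and to the uids a filesystem stores on disk.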
You use random. Are you on qpicks?

No. Random is simply used to initialize the value of the "weights" in order to run gradient descent. Without random values, it is very complex to choose your own weight values; mathematically speaking, it could take more time than the universe has existed to try all possible values for a 3-dimensional vector, as opposed to an easier 3-dimensional matrix. You could do it manually.

Have you seen this: Lottery prediction using GA+BF ANN+FL (Genetic Algorithm + Artificial Neural Network + Fuzzy Logic Control) based on SciPy/NumPy/matplotlib?

Hi, the main reason for this post is to hopefully find help (from someone) in finishing the algorithm to find patterns in previous lottery draws in order to predict future numbers. This algorithm is still not complete. But to answer your question, the outlook of the algorithm is to find hidden variables in order to compute predictions with zero error, using artificial intelligence. In choosing the numbers, I have found that numbers drawn prior to the day of game play provide higher chances of giving a true prediction of today's winning numbers. Reality is, the idea is for machine learning, or a computer, to predict the next winning numbers on a given game.

This is a great topic. I do believe it may be possible to make accurate predictions on a game like the Pick 3 or Win 4 (ideally) and Take 5; any Mega and Powerball is a different beast, but it may be possible to conquer. From my experiences I have come to the conclusion that the lottery results stem from the first day the game was ever played, on any game. Now this is something almost out of the universe, the correlations from then and now, but it is possible to create an advantage without even going back to day 1. It is something to keep in mind as we all play these games and stop looking for answers that mostly mean nothing.

The game is based on calculated risks, and some people seem not to believe in an educated guess (hypothesis). Yes, u may get lucky here and there, but an educated guess every draw is by far more powerful for succeeding in the game. The lottery ball catcher shown on TV and our QP generator are 2 different entities, so it kills me when people say more people win on QPs than any other way into the game; that is pure luck that the two come together. Brother, honestly, cracking the code of a game like Win 4 may be easier than the math; we can talk more about this. Tell me your take. Simplify: "What's more likely to happen will happen." Million dollar ops.

As of late I have been learning much about machine learning, basically trying to rehearse how computers learn, and possibly use this method to gain an "achievement" with the lottery!? lol. Thus far I wanted to present what I currently have figured out in terms of coding, using Python to predict a vector of 5 dimensions, i.e. a Pick 5 game. What I have is very simple, however complex if you're not familiar with programming and machine learning. I thought to share my code and see if there's anyone here who would like to contribute to this project and make it complete.

So far the following functions have been called: one and only one input training sample for X (multiple batches are needed to predict better), and likewise only one sample for y. There are 5 inputs, 5 outputs and 4 hidden layers, with random weights per neuron connection wljk. The sigmoid function is assigned to each activation layer, and sigmoid prime depicts the amount of error for the output y-hat.
    import numpy as np

    # Sample training input (one batch only; more are needed to train well)
    X = np.array([3, 5, 20, 23, 26], dtype=float)
    y = np.array([3, 20, 25, 28, 30], dtype=float)

    X = X / np.amax(X, axis=0)
    y = y / 36  # max number size is 36

    class Neural_Network(object):
        def __init__(self):
            # define hyperparameters
            self.inputLayerSize = 5
            self.outputLayerSize = 5
            self.hiddenLayerSize_1 = 7
            self.hiddenLayerSize_2 = 7
            self.hiddenLayerSize_3 = 7
            self.hiddenLayerSize_4 = 7
            # weights (parameters)
            self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize_1)
            self.W2 = np.random.randn(self.hiddenLayerSize_1, self.hiddenLayerSize_2)
            self.W3 = np.random.randn(self.hiddenLayerSize_2, self.hiddenLayerSize_3)
            self.W4 = np.random.randn(self.hiddenLayerSize_3, self.hiddenLayerSize_4)
            self.W5 = np.random.randn(self.hiddenLayerSize_4, self.outputLayerSize)

        def forward(self, X):
            # propagate inputs through the network
            # (the original posting skipped W4, leaving hidden layer 4 unused)
            self.z2 = np.dot(X, self.W1)
            self.a2 = self.sigmoid(self.z2)
            self.z3 = np.dot(self.a2, self.W2)
            self.a3 = self.sigmoid(self.z3)
            self.z4 = np.dot(self.a3, self.W3)
            self.a4 = self.sigmoid(self.z4)
            self.z5 = np.dot(self.a4, self.W4)
            self.a5 = self.sigmoid(self.z5)
            self.z6 = np.dot(self.a5, self.W5)
            yHat = self.sigmoid(self.z6)
            return yHat

        def sigmoid(self, z):
            # apply sigmoid activation to a scalar, vector or matrix
            return 1 / (1 + np.exp(-z))

        def sigmoidPrime(self, z):
            # derivative of the sigmoid function
            return np.exp(-z) / ((1 + np.exp(-z)) ** 2)

    NN = Neural_Network()
    yHat = NN.forward(X)
    print(yHat)
    print(y)

Still, what's missing is adding: backpropagation, computing the cost function with respect to each weight's derivative, numerical gradient checking, training the network, and testing for overfitting. Anyone interested in adding information is welcome. This is software to get a visual sense of how neural networks can perform predictions in the background.

I suggest you work on stocks or sports betting, as NNs don't work on random. I have built many and tried just about every prediction method.
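A sketch of the missing cost-function and backpropagation pieces, on a simplified one-hidden-layer version of the same setup (my own simplification; the layer sizes, seed and learning rate are assumptions, not the poster's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

rng = np.random.default_rng(0)
X = np.array([3, 5, 20, 23, 26], dtype=float) / 36.0
y = np.array([3, 20, 25, 28, 30], dtype=float) / 36.0

W1 = rng.standard_normal((5, 7))
W2 = rng.standard_normal((7, 5))

for step in range(10000):
    # forward pass
    z2 = X @ W1
    a2 = sigmoid(z2)
    z3 = a2 @ W2
    y_hat = sigmoid(z3)
    # cost J = 0.5 * ||y - y_hat||^2; gradients by the chain rule
    delta3 = -(y - y_hat) * sigmoid_prime(z3)     # dJ/dz3
    dJdW2 = np.outer(a2, delta3)                  # dJ/dW2
    delta2 = (delta3 @ W2.T) * sigmoid_prime(z2)  # dJ/dz2
    dJdW1 = np.outer(X, delta2)                   # dJ/dW1
    # gradient descent step
    W1 -= 0.5 * dJdW1
    W2 -= 0.5 * dJdW2

print(np.round(y_hat * 36))  # the net (over)fits its single training sample
```

Even with backpropagation in place, this only memorizes its one training sample; as the later replies note, it cannot extract signal from a random draw.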
Sure, you can build something that will give you the best numbers to play based on the history of the game; just don't expect the best numbers to show in the next game, or the one after that, or even the game after that. Using a NN to come to a solution looks attractive until the drawing. At best it might get a couple of numbers, just enough to keep you chasing your tail, so to say. Check the math forum, as there is an ongoing NN topic with a download.

RL
https://www.lotterypost.com/thread/300991
CC-MAIN-2022-33
refinedweb
1,055
55.34
US5615400A: System for object oriented dynamic linking based upon a catalog of registered function set or class identifiers

1. Field of the Invention

The present invention relates to dynamic linking of client applications with function sets or classes used by the client applications; and, more particularly, to systems for dynamically linking a client application at run time with libraries of function sets or classes registered either before or during execution of the client application.

2. Description of the Related Art

Traditionally, an application's source files are compiled into object modules and then linked together with other object modules, generically called libraries, to form a complete stand-alone application. This is called static linking. A disadvantage of static linking is that each application that links with the same library routine has its own private copy of the routine. Most of the size of such applications comes from the library code linked to each application. Another disadvantage of static linking is that the functionality that the application gets from the library is fixed. If the library has a bug in it, the application has to be re-linked with the new library to get the bug fixed. Dynamic linking, sometimes called late binding, differs because the application code and library code are not brought together until after the application is launched. If the code in the library is not loaded until it is actually required, then this is called dynamic loading.
If the same copy of library code being used by one application can be used by other applications at the same time, then the library is called a shared library. Dynamic linking of a class or function set involves binding code ("client application") which uses a class or function set to the code which implements the class or function set at run time. Thus, the term "dynamic" in this context, means "occurring at run time". Linking entails both loading the code and binding imported references to exported implementations of the classes or function sets. Existing dynamic linking systems do not provide class level or function set level dynamic linking. Instead, the linking is done at the level of the individual functions which may be exported by a library and imported by a client. However, each individual function must be exported by the library and each function used by the client must be imported in such prior art systems. To complicate matters, the name of the functions after compilation is not the name of the same in the source code (i.e., C++). Thus, the developer must deal with so-called "mangled" names to satisfy parameters of the dynamic linking systems of the prior art. Among other limitations, prior individual function level binding systems cause the implementation of a client to be dependent on a particular set of classes known at build time. Thus, new derived classes cannot be added later without having to rebuild the client. Further, prior art dynamic linking systems do not provide for dynamic installation of the linking system itself. In some cases, after a new linking system is installed, the host system must be rebooted or at least the client application has to be restarted. Existing dynamic linking systems are designed to support procedural programming languages and do not provide object oriented dynamic linking. 
Since some object oriented languages are derivatives of procedural languages (e.g., C++ is a derivative of C) these systems can sometimes provide dynamic linking of an object oriented language, provided the programmer deals with a class by the awkward approach of explicitly naming all members of the class. Nonetheless, these systems do not directly support object oriented programming languages. Accordingly, it is desirable to optimize a dynamic linking system to object oriented programming systems involving class level or function set level dynamic binding. Furthermore, such system should be robust, supporting efficient use of internal memory, and versioning of function sets or classes. Finally, it is desirable to provide for dynamic registration of updated or new libraries, so that a client application need not be restarted in order to take advantage of new versions of its libraries. The present invention provides explicit support for object oriented languages, such as C++, MCL, and Dylan. This support includes linking by class or class identifier where a library exports a class and a client imports a class. The client has access to all of the public virtual and non-virtual member functions of such dynamically linked classes. Also, the client may instantiate an object of a class which was not known at compile time. In this case, the client can call public virtual member functions using a known interface of one of the parent classes of a class. The system of the present invention employs a dynamic function set catalog which can be queried directly or indirectly by the client. Since, ultimately, the implementation of a class is a set of functions, it is possible to dynamically link classes. The dynamic function set or class catalog is updated from a catalog resource when a library is registered and when a library is unregistered with the system. Each registered function set or class is given an identifier when registered. 
The system is particularly suited to object oriented programming environments, where for a function set which characterizes a class; a direct catalog query by a client can determine for a given identifier, and the corresponding information in the catalog (1) whether the class is available, (2) the class IDs of its parent classes, (3) the class IDs of its derived classes, and (4) whether the class can be dynamically instantiated by a class ID. These functions enable clients to dynamically determine the availability and compatibility of classes, and enable new functionality to be delivered in the form of new shared class libraries and added to a client without recompiling the client. Since all code that implements a class can be dynamically linked, the client is not dependent on the implementation of the library code. A library can be fixed or enhanced without having to rebuild the client or other libraries. This simplifies the development and distribution of fixes and enhancements, and is generally better than existing patching mechanisms. Since other dynamic linking systems are not aware of classes as a distinct entity, the concept of class identification and class catalog management does not exist in these systems. Accordingly, the present invention can be characterized as a system for managing code resources for use by client applications in a computer, wherein the computer has internal memory storing at least one client application. The apparatus comprises a resource set catalog stored in the internal memory. The resource set catalog identifies a plurality of function sets by respective function set IDs. Further, the resource set catalog includes set records which characterize the implementation of functions within the respective sets. 
A dispatch engine, in the internal memory, linked with a client application, supplies a particular function set ID in response to a call by the client application of a particular function which is a member of a function set identified by the particular function set ID. A lookup engine in the internal memory, coupled with the resource set catalog and the dispatch engine, is responsive to the particular function set ID to look up a set record. The resource set catalog is characterized as containing set records for function sets, where a class for an object oriented system is a type of function set. Thus, where the term "function set" is used in the claims, it is intended that a species of function set may be a class. According to one aspect of the invention, the dispatch engine includes a dispatch record which is linked with the client, and stores a particular function set ID corresponding to a function set of which the called function is a member. Also, a dispatch routine is included in the dispatch engine, which is linked to the dispatch record and the lookup engine, and responsive to the call to the particular function to supply the particular function set ID to the lookup engine. In one preferred embodiment, the dispatch routine includes a first level dispatch segment linked to the client and to a global variable in the internal memory, and a second level dispatch segment linked to the global variable and the lookup engine. According to yet another aspect of the present invention, the dispatch record includes a function link cache and a set link cache. The function link cache stores a link to the particular function which is supplied by the link engine in response to the return of the particular function to the client. The dispatch engine includes a routine which looks at the function link cache for a cached link to the particular function and jumps to the particular function in response to the cached link if present. 
The set link cache stores a link to the set record for the set of functions including the particular function which had been previously called by the client. The link engine includes a routine that supplies the link to the set link cache in response to return of a particular function to the client. The dispatch engine includes a routine which looks in the set link cache for a cached link to the set record, and returns the set record to the link engine in response to the cached link upon a call to a function which is a member of the function set. Thus, a function call can be executed quickly if the function link cache is full, with a medium level of speed if the set link cache is full, and more slowly if a catalog search is needed to bind the function implementation. The invention further provides for assignment of version numbers to function sets according to a standard protocol. The dispatch record in this aspect includes version information linked with the client indicating a minimum version number supported by the client for the function set of which the particular function is a member. The set record includes a version number for the corresponding function set. The link engine in this aspect includes a routine which is responsive to the version information in the dispatch record and the version number in the set record to insure that the client supports a version of the particular function in the function set. In addition, the function sets are assigned serial numbers when loaded in internal memory. The dispatch record further includes a serial number linked with the client indicating a serial number of the corresponding function set when the set link cache is filled. The set link cache stores a pointer to a link structure in internal memory and the link structure includes a pointer to the set record having the particular function set ID. The set record stores the assigned serial number for the function set when it is loaded in internal memory. 
The link engine includes a routine responsive to the serial number in the set record, and the serial number in the dispatch record to insure validity of the set link cache entry, and a routine for clearing the link structure when the corresponding function set is loaded. The invention further provides for a use count record within the set record. The link engine in this aspect includes a routine to increment the use count when a client application binds with the function set corresponding to the set record, and to decrement the use count when a client application frees the function set corresponding to the set record. When the function set characterizes a class, the use count is incremented when a constructor for the class is called, and decremented when a destructor for the class is called. Thus, using the use count, the memory management system can unload function sets which are not in current use by any active applications. Since a client can enumerate all derived classes of a given class by class ID, it can determine what classes are available dynamically. The set of available classes can be extended at run time when new libraries are registered and a client can instantiate a class even though it is added to the system after the client was launched. New classes can be copied into a library directory or folder in the file system at any time and are automatically registered, or a new folder or file can be explicitly registered as a library container by a client. Dynamic registration of libraries of function sets or classes is accomplished using the class catalog. Because the class catalog is not bound with clients, all that needs to be done to register a new library, is to write the appropriate records into the class catalog. Once the appropriate records are written into the class catalog, the new library becomes available to the new client. 
The procedures outlined above are in place to protect the client from using a version of a class or function set which it does not support, and for finding an unloaded and reloaded version of a particular function which it has already used. The invention further provides for insuring type safety of the dynamically linked classes and function sets by means of shared library functions specifically designed to take advantage of the class catalog to insure such safety. The particular functions include the new object function, by which a client application may obtain information needed to construct a new object using the shared library manager, with reference to the class ID of the new object. Thus, using a class ID, the library manager looks up the information about the class in the class catalog, and returns a constructor for the class to the client. The client is then able to call the constructor, even if it did not know at the time it was written or compiled about the class. In addition, the library manager provides a verify class routine, by which a client application may verify the parent of a particular derived class for type safety. Finally, a cast object routine is provided, by which a client may cast an instance of a particular object as a parent class object. This routine utilizes the class catalog to return offsets within the particular object to the elements of the parent, even though the client application may not have been aware of the structure of the parent at the time it was written or compiled. Accordingly, it can be seen that the present invention provides a dynamic class catalog which given a class ID can be queried to determine whether the class is available, the class IDs of the parent class or classes, the class ID or IDs of the derived class or classes, and whether the class can be dynamically instantiated by class ID. 
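The new object and verify class routines can likewise be sketched; this is a minimal illustration under assumed names, not the patent's implementation. The manager resolves a constructor by class ID and walks the parent chain recorded in the catalog for type safety.

```python
class LibraryManager:
    def __init__(self):
        self._catalog = {}          # class_id -> (constructor, parent_id)

    def register(self, class_id, ctor, parent_id=None):
        self._catalog[class_id] = (ctor, parent_id)

    def new_object(self, class_id):
        # on-demand instantiation by class ID: the client need not have
        # known the class when it was written or compiled
        ctor, _ = self._catalog[class_id]
        return ctor()

    def verify_class(self, class_id, parent_id):
        # walk the recorded parent chain to verify the derivation
        while class_id is not None:
            if class_id == parent_id:
                return True
            class_id = self._catalog[class_id][1]
        return False

mgr = LibraryManager()
mgr.register("TObject", object)
mgr.register("TButton", lambda: "a button", parent_id="TObject")

assert mgr.new_object("TButton") == "a button"
assert mgr.verify_class("TButton", "TObject")
assert not mgr.verify_class("TObject", "TButton")
```

A cast object routine would additionally return the offsets of the parent's members inside the derived object, which a dictionary-based sketch like this does not need to model.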
When an object is instantiated, the library or libraries which implement the class and its parent classes are dynamically loaded. An object can be instantiated by a client which had no knowledge at compile time of the class of the object. If such client was compiled with the interface of a parent class of the object, then the object can be used as if it were an instance of the parent class. The system is particularly suited for object oriented dynamic linking which enables dynamic library registration, dynamic inheritance, and on-demand type-safe dynamic instantiation of objects by class identifier. Other aspects and advantages of the present invention can be seen upon review of the figures, the detailed description, and the claims which follow.

FIG. 1 is a schematic diagram of a computer system implementing the shared library manager of the present invention.
FIG. 2 is a schematic diagram of the resource set catalog used according to the present invention.
FIG. 3 is a diagram of the data structures involved in the dynamic binding of the present invention.
FIGS. 4A-4C provide the "C" language definition of the stub record, client VTable record and class VTable records according to a preferred embodiment.
FIGS. 5A and 5B provide a flowchart for the basic dispatching architecture used with the shared library manager of the present invention.
FIGS. 6A, 6B, and 6C provide a flowchart for the GET CLASS VTABLE RECORD step 112 of FIG. 5B.
FIG. 7 is a schematic diagram of the library registration function and organization.
FIG. 8 is a flowchart illustrating the operation of the dynamic registration routine using the structure of FIG. 7.
FIGS. 9A and 9B illustrate a new object routine executed by the shared library manager.
FIG. 10 illustrates a variant of the new object routine used when type safety is desired to be verified.
FIG. 11 is a flowchart for a verify class routine executed by the shared library manager.
FIG. 12 is a flowchart for a cast object routine executed by the shared library manager.
FIG. 13 is a flowchart for a GetClassInfo routine executed by the shared library manager.

A detailed description of preferred embodiments of the present invention is provided with respect to the figures. FIGS. 1-13 provide a high level overview of enabling various aspects of the present invention. A detailed description with references to segments of the source code follows a description of the figures.

FIG. 1 shows a computer system in which the present invention is loaded. The computer system includes a host CPU 10 which includes a plurality of registers 11 used during execution of programs. The CPU is coupled to a bus 12. The bus communicates with an input device 13 and a display 14 in the typical computer system. Further, non-volatile memory 15 is coupled to the bus 12. The non-volatile memory holds large volumes of data and programs, such as libraries, client applications, and the like. A high speed memory 16 is coupled to the bus 12 for both data and instructions. The high speed memory will store at least one client application 17, a shared library manager 18, shared library manager global variables at a predetermined address space within the memory 16, exported libraries 20, and other information as known in the art. According to the present invention, when a client application is compiled, a number of items are provided within the application. These items include a stub record, stub code, a library manager interface, a client VTable record, and a first level dispatch routine. The shared library manager will include a library builder routine, a resource set catalog, a second level dispatch routine, class VTable records for registered libraries, a lookup function, and a link function. As mentioned above, the resource set catalog provides information for function sets or classes which are available to a client.
The stub record points to the client VTable record within the client. The first level dispatch routine uses information in the client VTable record to call the second level dispatch routine. The second level dispatch routine calls the lookup function to find information about the called function in the resource set catalog. That information is provided to a link engine in the form of a class VTable record which links the client to the particular function that it has called. A particular protocol for using these features of the client and the shared library manager are described below with reference to FIGS. 5 and 6. The implementation of the resource set catalog, also called a class catalog herein, is shown in FIG. 2. A class catalog is a record 30 which includes a first field 31 which stores a number indicating the number of exported classes in all the library files which have been registered with the catalog 30. Next, the catalog includes an array 32 which stores class information records, one per exported class. The class information record in the array 32 consists of a structure 33 which includes a number of parameters. This structure includes a first field 34 named library which points to the library in charge of the code for the particular class or function set. A second field 35 stores a class serial number which is unique for each instance of the class; that is, it is incremented on registration of the class. A next field 36 stores a VTable record pointer which is the pointer to the VTable record for this class. A next field 37 stores a class ID, which is a class identifier for the class. A next record 38 stores the parent class ID. This is the class ID for a parent of this class. Next, a plurality of flags are stored in field 39 which are defined in more detail below. A next field 40 is a class link pointer. This points to a link structure for establishing a link with the client VTable record. 
Field 41 stores a version parameter indicating a version of the class implementation. The class information record is loaded from a library resource having a structure described below with reference to FIG. 8. FIG. 3 schematically illustrates the records used for dynamically linking a function call in a client to a class or function set. The figure is divided generally along line 50, where elements above line 50 are linked with the client during compile time, and elements below line 50 are linked with the shared library manager. Thus, the client includes a stub record 51 which provides a function link cache for a pointer to the implementation of the called function, and a pointer 52 to a client VTable record 53. The client VTable record stores the class identifier, for the class or function set, and a class link pointer providing a set link cache. The class link pointer 54 points to a link structure 55 which stores a pointer 56 to a class information record 57 in the class catalog. The class information record includes a pointer 58 to the class VTable record 59. The class VTable record includes a pointer 60 to the actual VTable of the class, and a pointer 61 to an export table for non-virtual functions. If the class link pointer 54 is null, then a lookup function is called which accesses the class catalog 62 to look up the class information record 57 for the corresponding function. If that function is registered, then the class information record 57 is supplied, and the class VTable record 59 may be retrieved. The figure also includes a schematic representation of a load engine 64. The load engine is coupled with the class information record 57. If the class corresponding to the class information record is not loaded at the time it is called, then the load engine 64 is invoked.
When the information is moved out of long-term storage into the high speed internal memory, the class information record 57 and class VTable record 59 are updated with the correct pointers and values. FIGS. 4A, 4B, and 4C respectively illustrate the actual "C" definitions for the stub record, client VTable record, and class VTable record according to the preferred implementation of the present invention. Details of these structures are provided below in the description of a preferred embodiment. They are placed in these figures for ease of reference, and to illustrate certain features. It can be seen that the client VTable record (FIG. 4B) includes version information (fVersion, fMinVersion) which indicates a current version for the function set or class to which the class link pointer is linked, and the serial number (fClassSerialNumber) for the same. The class information record 57 of FIG. 3 also includes version information for the currently loaded class or function set, and the serial number for the currently loaded class or function set. These fields are used for insuring version compatibility between the client and the currently loaded library, as well as validity of the link information. The class VTable record 59 (FIG. 4C) also includes a use count parameter (fUseCount). The use count parameter is incremented each time a class is constructed, and decremented each time a class is destructed. When the use count returns to zero, the class or function set is freed from internal memory. FIGS. 5A and 5B provide a flowchart for the basic implementation of the run time architecture. The algorithm begins by a client application calling a class constructor or a function by name (block 100). The stub code, generally outlined by dotted line 90, in the client with a matching name refers to the linked stub record (block 101). The stub code then tests whether the stub record includes the address for the constructor or function in its function link cache (block 102).
If it does, then the stub code jumps to the address (block 103). This is the fastest way in which a function may be executed. If the stub record did not include a valid cached address for the function, then the stub code calls a first level dispatch routine (block 104). The first level dispatch routine is generally outlined by dotted line 91. The first step in this routine is to load a pointer to the library manager interface in the client in a host register (block 105). Next, the offset to the second level dispatch routine is read from the SLM global variables in internal memory (block 106). Next, the second level dispatch routine is jumped to based on the offset read in block 106 (block 107). The second level dispatch routine is generally outlined by dotted line 92. The second level dispatch routine begins by pushing pointers for the client library manager interface and stub record onto the stack (block 108). Next, a lookup function is called for the class catalog (block 109). The lookup function is generally outlined by dotted line 93. The first step in the lookup function is to take the stub record and library manager interface for the client (block 110). Using the information, the lookup function retrieves the class identifier for the named class or function set from the client VTable record (block 111). Next, the class VTable record (a set record) is retrieved based on the class ID (block 112). This step can be accomplished using the class link pointer for cached classes or function sets, or requires a lookup in the class catalog. Once the class VTable record is found, a link engine, generally outlined by dotted line 94, executes. The first step in the link engine is to get the function array pointer from the class VTable record (block 113). Next, the array is searched for the particular called function (block 114). Next, a pointer to the function is stored in the stub record for the client providing a function link cache value (block 115). 
Finally, the function pointer is returned to the second level dispatch routine (block 116). The second level dispatch routine then cleans up the process and jumps to the function (block 117). FIGS. 6A, 6B, and 6C provide a flowchart for the step of block 112 in FIG. 5B, which returns a class VTable record in response to the class ID. Thus, the algorithm begins by taking the class ID as input (block 150). Using the class ID, the class catalog object is called which first compares the class serial number in the client VTable record to a global start serial number maintained by the shared library manager. The shared library manager insures that all serial numbers of valid clients are at least greater than the global start serial number. If this test succeeds, then the algorithm loops to block 152 where it is determined whether the class link pointer in the client VTable is null. If it is null, then the algorithm branches to block 161 in FIG. 6B. However, if the pointer is not null, then the class record is retrieved based on the information in the link pointer. After retrieving the TClass record, the class serial number in the client VTable is compared with the same in the TClass record. If they do not match, then the algorithm branches to block 161. If they do match, then the library record TLibrary for the class is retrieved from the information in TClass (block 155). The library record indicates whether the VTables for the class are initialized (block 156). If they are not, then the algorithm branches to block 162. If they are initialized, then the "code" serial number in the client VTable is compared with the same in the library record (block 157). If they do not match, then the algorithm branches to block 162. If the code serial numbers match, then the class VTable record is returned in response to the information in the TClass record (block 158). 
After retrieving the class VTable record, the class VTable record use count is incremented as well as the library use count (block 159). After block 159, the algorithm is done, as indicated in block 160. FIG. 6B illustrates the routine for handling the branches from blocks 152, 154, 156, and 157. For branches from blocks 152 and 154, this routine begins with block 161 which calls a lookup class function to get the TClass record from the class catalog (block 161). After retrieving the TClass record, the class VTable record is retrieved in response to the information in TClass (block 162). Next, the class VTable record use count and library use count are incremented (block 163). The class VTable record is reviewed to determine whether it includes pointers to the VTable and the export table for the class (block 164). If there are pointers, then the algorithm is done (block 165). If not, then a set up function is called to initialize the class VTable record (block 166). The setup function is described in FIG. 6C. This algorithm begins with an access to the library record to determine whether the code is loaded (block 167). If the code is loaded, then the library use count is incremented (block 168). If the code is not loaded, then the library is loaded, and its use count incremented (block 169). After either block 168 or 169, the client VTable record is retrieved from the library, and the pointer to the class VTable record is retrieved (block 170). The pointer taken from the client VTable record in the library is used to get the class VTable record (block 171). Next, the cache links in the client VTable record in the client, and the same in the loaded library, and the cache links in the TClass record are updated with information about the class VTable record (block 172). Finally, the algorithm is done (block 173). FIG. 7 is a schematic diagram of the structure for the library's files which can be dynamically registered in the class catalog. 
The library files are normally stored on disk 200. These library files are registered in an extensions folder 201 which is graphically illustrated on the user interface of the device, such as the Macintosh™ computer. The extension folder includes a number of management files 202, and a plurality of library files 203, 204, 205. The library files are all registered in the class catalog. The library files having a file type "libr" 206 include a number of segments, including code resources 207, a dictionary "libr" resource 208, and a dictionary "libi" resource 209. The code resources 207 include the library code segments such as classes and generic function sets. The dictionary "libr" resource describes the library. The dictionary "libi" resource lists libraries which this library depends on. The dictionary "libr" resource 208 is shown in more detail at block 210. This resource includes a library ID field 211 which is the character string identifying the library. A second field 212 in the structure 210 identifies the code resource type. A third field 213 identifies the template version number for the library file. A fourth field 214 identifies the version number of the library, which is maintained according to a standard protocol by the developers of the libraries. The next field 215 stores a plurality of library flags. The next field 216 identifies the number of elements in an exported class information array 217. This information array 217 includes one record per exported class in the library. Each record includes a class identifier 218, class flags 219, a version number of the current version of the class 220, a minimum version number for the backward compatible version of the class 221, and a number 222 which indicates the number of elements in the following array 223 of parent class IDs. The array 223 of parent class IDs includes the identifier of each parent of the class.
The information in this resource 210 is used to create the class information records for the class catalog upon registration of the library. FIG. 8 illustrates the basic algorithm for dynamically registering libraries using the structure of FIG. 7. As indicated at block 300, the operating system designates a special folder such as the Extensions folder 201 in FIG. 7, and applications may designate additional folders for registered libraries. The Library Manager intercepts operating system calls which indicate a file has been copied or moved. Then it is determined whether the file subject of the call is in the special folder or one of the additional folders (block 301). If a new file is found, then it is determined whether the new file is a shared library resource. If it is, then the library is registered in the class catalog by providing information to fill the TClass records for the classes in the class catalog (block 302). If a library has been moved out of the folder, then the library manager will leave it registered until it is no longer in use. After it is no longer in use, then it is moved out of the class catalog (block 303). The shared library manager also provides functions which are linked to the clients, and allow them to take advantage of the class catalog for various functions. Particular routines include a new object routine which is described with reference to FIGS. 9A and 9B, a variant of the new object routine shown in FIG. 10, a verify object routine shown in FIG. 11, a cast object routine shown in FIG. 12, and a GetClassInfo routine shown in FIG. 13. As indicated in FIGS. 9A-9B, the new object routine takes as input the class identifier for the new object and an indication of a memory pool for allocation of the object (block 320). Using this information, it is determined whether the memory pool has actually been allocated for the object. If not, then the pool is allocated (block 321).
Next, a lookup class function is called in the class catalog to get the TClass object for the identified class (block 322). Next, flags maintained in the TClass object indicate whether the new object routine is supported for the particular class (block 323). If it is supported, then it is determined whether the TClass record for the class is associated with a library other than the root library. If it is a root library class, then it is guaranteed to be always loaded, and the use count is incremented (block 324). If the class is associated with a library other than a root library, a load function is called to either load the library and then increment the use count, or if the library is already loaded then just increment the use count (block 325). Next, the TClass object is used to retrieve the class VTable record (block 326). Using the flags in the class VTable record, it is determined whether the new object routine is supported for the class (once again) (block 327). Next, the size of the object is determined from the class VTable record, and memory of that size is allocated in the pool (block 328). Next, a pointer to the allocated memory is retrieved (block 329). The library manager requires that the second slot in the export table for classes in the library contain a pointer to a constructor for the class. Thus, the pointer for the constructor is retrieved from the export table location indicated in the class VTable record (block 330). Using the constructor pointer, the constructor is called which places an object in the allocated memory (block 331). Next, the use count is decremented by one to balance the use count (block 332). This is required because the load function step of block 325 increments the use count, as does the calling of the constructor in block 331. Thus, the decrementing of the use count is required for balancing the use counts. After the decrementing of the use count in block 332, the algorithm is done (block 333).
Thus, a client is able to obtain and call a constructor for a class of which it was not aware at compile time. FIG. 10 illustrates a variant of the new object routine which has better type safety. In particular, if the client is aware of the parent of the new object to be created, then the variant of FIG. 10 can be called. The variant of FIG. 10 takes as input the parent identifier for the parent of the class, the class ID of the new object to be created, and the memory pool allocation parameters (block 340). Next, a verify class function which is described with respect to FIG. 11 is called to insure that the identified class is derived from the identified parent (block 341). If this verification is successful, then the new object routine of FIGS. 9A and 9B is called for the identified class (block 342). (See Appendix, TLibraryManager::NewObject (two variants)). FIG. 11 illustrates the verify class routine called in block 341 of FIG. 10. Also, this routine is available to the clients directly. The verify class routine begins by taking the class ID of the base class and of a derived class (block 350). Next, the lookup class function of the class catalog is called to get the TClass record for the derived class (block 351). The TClass record for the derived class will include a list of parent classes. This list is reviewed to determine whether the identified parent class is included (block 352). If the parent class is found, then the algorithm is done (block 353). If the parent class is not found, then the algorithm goes to the TClass record of each parent of the derived class in order. The list of parents for each parent is reviewed to find the identified parent class, following this recursion to the root class. The algorithm ends when the parent is found, or the root class is reached. (See Appendix TLibraryManager::VerifyClass and ::internalVerifyClass).
If there is more than one immediate parent in the list of parents for the TClass record, then a case of multiple inheritance is found. In this case, the parent class hierarchy must be reviewed to insure that at least one parent appears as a virtual base class (block 355). If the identified parent in the call of verify class is not found as a virtual base class, then it is determined not to be related to the derived class for the purposes of this function. FIG. 12 is a flowchart for a cast object routine executed by the shared library manager and linked to the clients by the required structure of the objects of the shared library manager environment. Using this function, a client can cast an object of a derived class as a parent class object, even though it may not have been aware of the structure of the parent at the time it was compiled. This algorithm takes as input a pointer to the object and an identifier of the parent class (block 400). Next, the class of the object is determined based on the required structure of the object (block 401). This structure involves placing the pointer to the VTable as the first data member in the object. Also, the first slot in the VTable is a pointer to the VTable record of the shared library manager. Using the class ID of the object determined from the class VTable record, and the parent ID provided when cast object was called, the verify class routine is then called (block 402). If the verify class succeeds, then it is determined from the class VTable record of the object whether a case of single or multiple inheritance is found (block 403). In a case of single inheritance, the algorithm is done because the offsets are determinate in that case (block 404). If a case of multiple inheritance is encountered, then the correct offsets are found within the object to the data of the parent class (block 405). This can be determined based on the place of the parent class in the hierarchy of parent classes found in the TClass records.
(See Appendix TLibraryManager::CastObject and ::internalCastObject). GetClassInfo is a member function of TLibraryManager, or a non-member function is provided which calls gLibraryManager->GetClassInfo for you. Given a class id which specifies a given base class and an error code pointer, it returns a TClassInfo for the given class if the class is registered in the class catalog. The TClassInfo can then be used to iterate over all of the derived classes of the given base class. If the class is not registered or an error occurs, GetClassInfo returns NULL. (See Appendix, TLibraryManager::GetClassInfo). The GetClassInfo algorithm is shown in FIG. 13. It involves the following steps:
1. Call the LookupClass function with the class id. If this returns a pointer that is not NULL then we know that the class is registered with the class catalog. If it returns NULL then set the error code to kNotFound and return NULL for the function return value (block 500).
2. Using the fClasses field of fClassCatalog, which is a TCollection class instance, call the CreateIterator function. If the iterator cannot be created then return NULL for the function return value and set the error code to kOutOfMemory (block 501).
3. Create a TClassInfo instance and set the fIterator field to the iterator returned in step 2. Set the fBaseClassID field to the given class id. If the TClassInfo cannot be created then return NULL as the function return value and set the error code to kOutOfMemory (block 502).
4. Call the Reset function of the iterator to re-start the iteration. Return the TClassInfo as the function return value and set the error code to kNoError (block 503).
The functions of TClassInfo, which are used with the TClassInfo instance returned by GetClassInfo, are:

virtual void Reset( );
virtual void* Next( );  // safe to cast to TClassID* or char*
virtual Boolean IterationComplete( ) const;
// TClassInfo methods
TClassID* GetClassID( );
Boolean GetClassNewObjectFlag( ) const;
Boolean GetClassPreloadFlag( ) const;
size_t GetClassSize( ) const;
TLibrary* GetLibrary( ) const;
TLibraryFile* GetLibraryFile( ) const;
unsigned short GetVersion( ) const;
unsigned short GetMinVersion( ) const;

Copyright Apple Computer 1991-1993

Data members of TClassInfo include:

TClassID fBaseClassID;
TClassID fClassID;
TLibrary* fLibrary;
TLibraryFile* fLibraryFile;
unsigned short fVersion;
unsigned short fMinVersion;
Boolean fNewObjectFlag;
Boolean fPreloadFlag;
Boolean fFunctionSetFlag;
Boolean fFiller;
size_t fSize;
TIterator* fIterator;
TClass* fClass;

Copyright Apple Computer 1991-1993

Reset--starts the iteration over from the beginning.
Next--gets the next derived class in the list.
IterationComplete--returns true if Next has been called for all derived classes of the given base class.
GetClassID--returns the class id of a derived class (fClassID).
GetClassNewObjectFlag--returns true if the class id returned by GetClassID can be used to create an object using the NewObject function.
GetClassPreloadFlag--returns true if the class's preload flag is set to preload the class implementation at boot time.
GetClassSize--returns the size of the data structure for an instance of the class.
GetLibrary--returns the TLibrary for the class.
GetLibraryFile--returns the TLibraryFile for the class.
GetVersion--returns the current version of the class.
GetMinVersion--returns the minimum compatible version of the class.

The algorithm for TClassInfo includes the following steps:
1. When the TClassInfo is created the fBaseClassID field is set to the class id passed to GetClassInfo, the fIterator field is set to a class catalog iterator which iterates the TClass records registered in the class catalog, and fClass is set to the TClass for the class corresponding to fBaseClassID.
2. The function Next sets the data members fClassID, fLibrary, fLibraryFile, fVersion, fMinVersion, fNewObjectFlag, fFunctionSetFlag and fSize. It gets this information from the next TClass record, using the fIterator, which has the desired "is derived from" relationship with the class given by fBaseClassID.
3. The "getter" functions listed above return the information in the corresponding data member of TClassInfo.
4. The first time the getter functions are called, or after Reset is called, the information returned is for the fBaseClassID class itself.

Source code for the cast object, verify class, new object and get class info routines is provided in the Appendix. Also, the Appendix provides selected class interfaces and functions which may be helpful in understanding a particular implementation of the present invention, when considered with the detailed description of the run-time architecture which follows. A more detailed description of a particular implementation of the shared library manager is provided below with reference to code written for the Macintosh™ computer.

Overview

The Shared Library Manager (SLM) described herein provides dynamic linking and loading facilities for the 68K Macintosh. The system can be adapted to any platform desired by the user. SLM provides dynamic loading, a.k.a. on-demand loading, both for procedural programs and for C++ programs. This is different from the traditional approach of launch-time loading, also referred to as "full transitive closure", which means that all required libraries are loaded and relocated at once when an application is launched.
SLM provides procedural programming support for exporting and importing functions from C, Pascal, Assembler, or any language with compatible calling conventions. In addition, SLM provides extensive support for exporting and importing C++ classes, with an emphasis on providing the dynamic features which are fundamental to building extensible applications and system components for Macintosh. Shared libraries can be installed at any time and called in a transparent fashion. There is no dependency when building a client on where the libraries that it uses are, what the filename(s) are, or how many of them may eventually be used during a session. The libraries are loaded and unloaded dynamically based on use counts. The SLM takes care of all of the details of binding and loading, and the client only has to know the interfaces to the classes and functions that it wants to use. SLM provides facilities for instantiating objects by class name, for enumerating a class hierarchy, and for verifying the class of an object or the class of a class.

Basis of the Architecture

The architecture for SLM is based on the 68K run-time architecture and the MPW tools architecture. A build tool for the SLM takes an exports file (.exp) and an MPW object file (.o) as input and generates a library file. It does this by processing these files and then calling the MPW linker to finish the job. For C++ libraries, SLM is based on the MPW/AT&T 2.1 CFront v-table dispatching model. Development tools must generate v-tables compatible with this model. In particular, we assume that the first entry in the v-table is not used--SLM uses it to point to class "meta-data". SLM also requires the v-table pointer to be the first data member in an object in order for certain functions to work. Since these functions do not know the class of the object (their job is to find out the class) they can only find the class meta-data via the v-table pointer if it is in a known place in the object.
It is theoretically possible for SLM to support more than one dispatching model; however, for interoperability it would not be desirable to have more than one model. A dispatching agent could arbitrate between different dispatching models or even different calling conventions, which would introduce considerable run-time overhead. It would be possible for different dispatching models to co-exist and yet not interoperate. SLM and 68K libraries support both single (SI, SingleObject rooted) and multiple inheritance (non-SingleObject rooted) classes. A development tool wishing to support the SLM in a non-MPW environment will need to generate a `libr` resource, a set of code resources, and optionally, a `libi` resource, as shown in FIG. 7. The jump table resource is not modified at build time--it is a normal model far jump table. It is modified by SLM at load time but the build tools don't need to be aware of this. The glue that does 32-bit relocation and the data initialization glue are code libraries statically linked into the library initialization segment. Shared libraries have a jump table resource (`code` 0) plus an initialization code segment (`code` 1) plus at least one implementation segment (`code` 2 and up). If more than one library is in a library file then the resource type for each set of code resources must be unique (usually `cd01`, `cd02` etc.).
Library resource--Specifies the type of the `code` resources, and the classes and function sets exported by this library. Of course, the resource ID for each `libr` resource in a file must be unique.
Jump Table resource--Always present in the file, present in memory only if the -SegUnload option is used with SLMBuilder. Note that NoSegUnload is the default.
Initialization code segment resource--Only contains code linked to the %A5Init or A5Init segment and used at initialization time; must not contain any other code unless the -SegUnload option is used with SLMBuilder.
Implementation code segment(s)--Contains any implementation code for the library, including the CleanupProc if any. Often libraries will have only one implementation segment although there is no harm in having more. Implementation segments are numbered 2 and up. When the SLM loads a library, the jump table resource is loaded, plus the initialization segment. After initialization this segment is released--it will not be needed again unless the library is unloaded and then reloaded. If the "NoSegUnload" option is used (NoSegUnload option to LibraryBuilder is the default) then all code segments are loaded. The jump table plus the relocation information generated for each segment is used to relocate the jump table based addresses in the code to absolute addresses pointing to the target function in the target code resource. The jump table resource is then released. When using this option the library should normally have only one implementation segment but this is not a requirement. If the "SegUnload" option is used (SegUnload option to LibraryBuilder) then the implementation segment is loaded, plus any segments designated preload (preload resource bit can be set on a per-segment basis from a statement in the .r file), or all segments are loaded if a library itself is designated preload (preload flag in the Library descriptor). Each jump table based reference in the code is relocated to point to the absolute address of the jump table entry. The jump table is modified to contain pc-relative jsr instructions instead of segment loader traps. The jsr goes to the SLM segment loader entry point just before the jump table (in the 32 byte gap). This limits the size of the jump table to 32K, which should not be a problem since libraries should not be huge bodies of code--library files can easily have multiple libraries in them if packaging is an issue. 
It is a jsr and not a jmp since the segment loader uses the return address, which now points to the last two bytes of the jump table entry, to compute which segment is to be loaded. At load time the SLM moves the segment number information out of the jump table entry to an array following the jump table. This is due to the fact that the jsr pc-relative takes 2 more bytes than the original segment loader trap. Note that all the machinations on the jump table are transparent to a tool developer. A segment is loaded when a jump table (jt) entry is called and the jump table is then modified to contain an absolute jump for all jt entries for that segment. When segments are unloaded, the pc-relative jsr is put back in the jump table for each entry for that segment. The data segment based addresses (the "global world", i.e. what would have been A5 based) are relocated to the absolute address of the data.

Building a Shared Library

Any application, extension, driver, or other stand-alone code resource on the Macintosh can use shared libraries. Of course, a client can also be another shared library. A shared library can import and export functions or classes. A non-library client (for brevity, an "application") can only import functions or classes. An application can sub-class an imported class, but it cannot export a class. This is a minor limitation since an application can contain shared libraries as resources in the application itself, or register a special folder containing shared libraries that are effectively part of the application. The Shared Library Manager accomplishes its sharing by examining and modifying the object file destined to become a shared library. It generates several source and resource files that are then used to create the shared library and to provide clients with linkable "stubs" to gain access to the functions and classes in the shared library. In the sections which follow, we will examine each of these files.
Dispatching Architecture

First and foremost of the files generated by the SLM LibraryBuilder tool is the file SharedLibTemp.stubs.a. This is an assembly language file that contains the stubs for all of the functions that are exported. Exported functions fall into five categories. They are: 1) function set function; 2) class constructor; 3) class destructor; 4) class virtual functions; and 5) class non-virtual functions. There is a separate dispatching function for each of these classifications, and the stubs are generated slightly differently for each. Each one will be examined in detail shortly. For the purposes of explaining the architecture, an exported function in a function set will be used. For each dispatching function, there are 5 parts. The first two parts are supplied by the build tool, and are unique for each function. The other 3 parts are supplied by the SLM. The first part is a structure called a stub record (see FIG. 6).

______________________________________
_stbSample RECORD EXPORT
        IMPORT _CVRExampleFSet:Data
        DC.L  0
        DC.L  _CVRExampleFSet
        DC.L  0
        DC.W  3
        ENDR
______________________________________
Copyright Apple Computer 1991-1993

The stub record contains 2 fields used internally by the Shared Library Manager (the first and third fields in the example shown above). The second field is a pointer to the ClientVTableRec. The ClientVTableRec is a structure (see FIG. 6) that is linked with the application and holds all of the information that the SLM needs to dynamically find and load the function set or class referenced by the stub.
This is the "C" definition of the ClientVTableRec:

______________________________________
struct ClientVTableRec
{
    TLink*  fClassLinkPtr;
    long    fClassSerialNumber;
    long    fCodeSerialNumber;
    short   fVersion;
    short   fMinVersion;
    char    fClassIDStr[2];
};
______________________________________
Copyright Apple Computer 1991-1993

The fClassLinkPtr field contains information used by the SLM to cache the link to the internal TClass object, which contains all of the information known about the class or function set. The two serial number fields are used to insure that the cached information is still valid (if the library is unloaded and a new version of a class or function set in the library is dragged into a registered folder, any cached information becomes invalid). The version number fields contain information about the version number of the class or function set that your application (code resource, extension, etc.) linked with, and the fClassIDStr field contains the actual ClassID of the class or function set so that it can be found in the SLM catalog. The last field in the stub record is an index. It tells the SLM which entry in the function set VTable contains a pointer to the desired function.

Most stub dispatching comes in 3 speeds--very fast, fast, and slow. Very fast dispatching occurs when you have already used the function before. In this case, the first field of the stub record contains the address of the actual function, and can be called immediately. If you have never called this function before, then a dispatching stub is called which causes the SLM to examine the ClientVTableRec. If you have already used another function in this same class or function set, then the ClientVTableRec already contains cached information as to the location of the tables containing the function addresses. In this case, your stub is updated so that the address of the function is cached for next time, and the function is then called.
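The three dispatch speeds can be modeled in C. This is a sketch of the caching policy only; the structure layouts and names below are simplified stand-ins for the stub record and ClientVTableRec, not the SLM's actual definitions.

```c
#include <assert.h>
#include <stddef.h>

typedef int (*Func)(void);

typedef struct {            /* stands in for the ClientVTableRec */
    Func *cached_vtable;    /* non-NULL once the class/function set is found */
} CVR;

typedef struct {            /* stands in for the stub record */
    Func cached_func;       /* very fast path: call directly when set */
    CVR *cvr;
    int  index;             /* index into the function-set vtable */
} StubRec;

static int hello(void) { return 42; }
static Func example_vtable[] = { NULL, NULL, NULL, hello };

static int full_lookups = 0;          /* trips through the slow path */

/* Slow path: look up the ClassID in the catalog, "load" the library,
 * and cache the vtable location in the CVR. */
static void catalog_lookup(CVR *cvr)
{
    full_lookups++;
    cvr->cached_vtable = example_vtable;
}

static int dispatch(StubRec *stub)
{
    if (stub->cached_func)                         /* very fast */
        return stub->cached_func();
    if (stub->cvr->cached_vtable == NULL)          /* slow */
        catalog_lookup(stub->cvr);
    /* fast: CVR already cached; update the stub for next time */
    stub->cached_func = stub->cvr->cached_vtable[stub->index];
    return stub->cached_func();
}
```

After the first call through a stub, every later call takes the very fast path; a second stub on the same function set would skip the catalog lookup entirely because the CVR cache is shared.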
If the ClientVTableRec does not contain cached information, then the SLM must look up the class or function set by the ClassID stored in the ClientVTableRec, load the library containing the class or function set if it is not already loaded, update the cached information in the ClientVTableRec, update the cached information in the stub, and then finally call the function.

The second part of the dispatching mechanism is the actual stub code that a client links with:

__________________________________________________________________________
Sample  PROC EXPORT
        IMPORT _stbSample:Data
        IF MODEL = 0 THEN
        lea    _stbSample,a0         ; Get the stub record into a0
        ELSE
        lea    (_stbSample).L,a0
        ENDIF
        move.l (a0),d0               ; Get the cached function address
        beq.s  @1                    ; Not cached - do it the hard way
        move.l d0,a0                 ; Get address into a0
        jmp    (a0)                  ; and jump to it
        IMPORT _SLM11FuncDispatch
@1
        IF MODEL = 0 THEN
        jmp    _SLM11FuncDispatch    ; More extensive dispatching needed
        ELSE
        jmp    (_SLM11FuncDispatch).L
        ENDIF
        IF MACSBUG = 1 THEN
        rts
        DC.B   $80, $06
        DC.B   'Sample'
        DC.W   0
        ENDIF
        ENDP
__________________________________________________________________________
Copyright Apple Computer 1991-1993

Normally two (or maybe four) versions of the stub are assembled--one in model near (MODEL=0), and one in model far (MODEL=1). There may also be debugging versions generated with Macsbug symbols for each (MACSBUG=1). Notice that the first thing that the stub does is to check whether the first field of the stub record is non-zero. If it is, it just jumps through the pointer stored there. Otherwise, it calls a first level dispatching function called _SLM11FuncDispatch (or one of the other 4 variations of this function). This function is the third part of the dispatching code:

______________________________________

The first two instructions fetch the SLM globals.
The SLM stores its global information in a low-memory structure known as the ExpandMemRec (location $2B6 in the Macintosh contains a pointer to this structure). The SLM global information is stored as a pointer at offset $10C in this structure. One of the fields in this structure contains a vector of dispatch functions. This dispatcher uses the 5th element in the vector to dispatch through (this is the vector to dispatch functions in function sets). The code for this dispatching function (and the other four variations) is supplied in LibraryManager.o and LibraryManager.n.o, so clients can link with it. The fourth part of stub dispatch is the second level dispatching function that is stored in this vector:

______________________________________
SLMFuncDispatch PROC Export
        move.l d0,-(sp)      ; Push the library manager
        move.l a0,-(sp)      ; Push the stub record
        move.l d0,a0         ; Put TLibraryManager into a0
        move.l 20(a0),a0     ; Get the class catalog
        move.l a0,-(sp)
        move.l (a0),a0
        move.l CatVTable.LookupFunction(a0),a0
        jsr    (a0)          ; Call the LookupFunction method
        lea    12(sp),sp     ; Drop parameters
        move.l d0,a0         ; Get the function
        jmp    (a0)          ; Call it
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

This dispatcher calls a function in the TClassCatalog object, which is the fifth and final part of stub dispatch. A partial declaration for TClassCatalog follows.
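The indirection through a vector of dispatch functions can be sketched in C. The point, as the text explains later, is that clients link only with a tiny first-level dispatcher that jumps through a vector the SLM owns, so the SLM's internals can change without relinking clients. The slot number and names here are illustrative.

```c
#include <assert.h>

/* Sketch of vectorized dispatch: clients know only a vector slot, not the
 * address (or even the implementation) of the real dispatcher. */

typedef int (*Dispatcher)(int arg);

static int func_set_dispatch_v1(int arg) { return arg + 1; }
static int func_set_dispatch_v2(int arg) { return arg + 100; }

/* Stands in for the vector reachable from the low-memory ExpandMemRec. */
static Dispatcher slm_dispatch_vector[8];

/* This is what a client links with: it only knows the slot index. */
static int SLM_first_level_dispatch(int arg)
{
    return slm_dispatch_vector[5](arg);  /* slot 5 = function-set dispatch */
}
```

Because the client-linked code never embeds the real dispatcher's address, the SLM can replace the function stored in the slot at any time, as the second assertion below demonstrates.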
__________________________________________________________________________
#define kTClassCatalogID "!$ccat, 1.1"

typedef ClientVTableRec CVR;
typedef FunctionStubRec FSR;
typedef void (*VF)(void);
typedef VF VTable[ ];

class TClassCatalog : public TDynamic
{
public:
    TClassCatalog( );
    virtual ~TClassCatalog( );

    //
    // These first functions are here for backward compatibility with
    // SLM 1.0
    //
#if COMPAT10
    virtual VR ExecDestructor(CVR*) const;
    virtual VR ExecConstructor(CVR*, size_t idx) const;
    virtual VR ExecVTableEntry(CVR*, size_t idx) const;
    virtual VR ExecExportEntry(CVR*, size_t idx) const;
    virtual void ExecFunction(FSR*, TLibraryManager*) const;
    virtual VR ExecRootDestructor(CVR*) const;
    virtual VR ExecRootConstructor(CVR*, size_t idx) const;
    virtual VR ExecRootVTableEntry(CVR*, size_t idx) const;
    virtual VR ExecRootExportEntry(CVR*, size_t idx) const;
#endif
    virtual VF LookupFunction(FSR*, TLibraryManager*);
    virtual VTableRec* GetVTableRecMemory(size_t num) const;
    virtual VTableRec* Init1VTableRec(VTableRec*, ProcPtr setupProc,
                CVR* client) const;
    virtual VTableRec* InitVTableRec(VTableRec*, VTable vTable,
                VTable exportTable, CVR* parent, long size, char*) const;
    virtual VTableRec* InitVTableRec(VTableRec*, VTable exportTable,
                char*) const;
#if COMPAT10
    virtual VTableRec* GetGenVTableRec(CVR*) const;
#endif
    virtual VTableRec* LookupVTableRec(CVR* theClient) const;
    virtual VTableRec* GetVTableRec(CVR*, Boolean isSub) const;
    virtual VTableRec* ReleaseVTableRec(CVR*) const;
    virtual OSErr InitLibraryManager(TLibraryManager**, long*,
                size_t poolsize, ZoneType theZone, MemoryType theMemType);
    virtual Boolean CleanupLibraryManager(TLibraryManager**);
    virtual VF GetDestructor(FSR*, TLibraryManager*) const;
    virtual VF GetConstructor(FSR*, TLibraryManager*, Boolean isSub) const;
    virtual VF GetVTableEntry(FSR*, TLibraryManager*) const;
    virtual VF GetExportEntry(FSR*, TLibraryManager*) const;
    virtual VTableRec* GetParentVTableRec(CVR*) const;
};
__________________________________________________________________________
Copyright Apple Computer 1991-1993

There are five routines in the TClassCatalog, corresponding to each of the five dispatch methods. They are: 1) LookupFunction; 2) GetConstructor; 3) GetDestructor; 4) GetVTableEntry; and 5) GetExportEntry. It is not necessary that any generated code know vtable offsets into the class catalog. These offsets are known by the vectorized dispatch code. In fact, the dispatch code was vectorized specifically so that offsets in the TClassCatalog could change without causing recompilation of clients. This fifth and final dispatch routine does the actual finding of the class, and storing of any cached values. In the code examples that follow, code and structures that have a bold header and trailer are code that must be generated by a build tool in order to create a shared library. Code that is not in bold is shown only for reference.

Generating Function Set Code

To import a function a client has to include the interface file when compiling (in C, the .h file), and link with the client object file (.cl.o or .cl.n.o) provided by the library developer. The client object file contains the client stubs for the functions that the client calls. Consider the following example function set which is exported by the ExampleLibrary.
This is the FunctionSet declaration in the exports file (ExampleLibrary.exp):

__________________________________________________________________________
FunctionSet ExampleFSet
{
    id = kExampleFunctionSet;
    //
    // We could use the following export line, but we want all exported
    // functions from the library to be exported, so we say nothing!
    //
    // exports = Hello, extern HelloC, pascal extern HelloPascal, Goodbye,
    //           pascal GoodbyePascal, TExampleClass::Test;
};
__________________________________________________________________________
Copyright Apple Computer 1991-1993

These are the prototypes from the interface file (ExampleLibrary.h):

______________________________________
char* Hello(ulong&);
char* Hello(ulong*);
extern "C" char* HelloC(ulong*);
ulong Goodbye( );
pascal Ptr HelloPascal(ulong& theHelloTicks);
pascal ulong GoodbyePascal( );
______________________________________
Copyright Apple Computer 1991-1993

Function Set Function Dispatching (Stubs)

The build tool, LibraryBuilder, generates stubs to be linked with the client for each FunctionSet function exported by a given library. A stub is generated for each function.
Here is the stub record and the stub that is generated for the "HelloC" function (in SharedLibTemp.stubs.a):

______________________________________
_stbHelloC RECORD EXPORT
        IMPORT _CVRExampleFSet:Data
        DC.L  0
        DC.L  _CVRExampleFSet
        DC.L  0
        DC.W  3
        ENDR

HelloC  PROC EXPORT
        IMPORT _stbHelloC:Data
        IF MODEL = 0 THEN
        lea    _stbHelloC,a0
        ELSE
        lea    (_stbHelloC).L,a0
        ENDIF
        move.l (a0),d0
        beq.s  @1
        move.l d0,a0
        jmp    (a0)
        IMPORT _SLM11FuncDispatch
@1
        IF MODEL = 0 THEN
        jmp    _SLM11FuncDispatch
        ELSE
        jmp    (_SLM11FuncDispatch).L
        ENDIF
        IF MACSBUG = 1 THEN
        rts
        DC.B   $80, $0B
        DC.B   'stub_HelloC'
        DC.W   0
        ENDIF
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

This is the structure of the stub record in C:

______________________________________
struct FunctionStubRec
{
    VirtualFunction  fFunction;
    ClientVTableRec* fClientVTableRec;
    FunctionStubRec* fNextStubRec;
    unsigned short   fFunctionID;
};
______________________________________
Copyright Apple Computer 1991-1993

When the stub is called, it first checks to see if the address of the function is already cached. If so, it jumps immediately to the function. Otherwise, it jumps to the function dispatcher (_SLM11FuncDispatch) with the stub record as a parameter in register A0. The stub dispatcher then obtains the address of the actual dispatching code from a low-memory global set up when SLM was originally loaded, and it jumps to that code. This extra indirection allows SLM the flexibility to modify the function loading and dispatching mechanism without affecting client code (since clients are linked with the _SLM11FuncDispatch code). This is the function dispatcher that is linked with the client.
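The fNextStubRec field lets stub records be chained together. The following C sketch assumes (this use of the chain is an assumption, not stated above) that the chain exists so that cached function addresses can be flushed when a library is unloaded and the cached pointers become stale:

```c
#include <assert.h>
#include <stddef.h>

typedef void (*VirtualFunction)(void);

/* Layout follows the FunctionStubRec shown above. */
typedef struct FunctionStubRec FunctionStubRec;
struct FunctionStubRec {
    VirtualFunction  fFunction;        /* cached address, or NULL   */
    void            *fClientVTableRec;
    FunctionStubRec *fNextStubRec;     /* chain of stubs            */
    unsigned short   fFunctionID;
};

static void dummy_func(void) {}

/* Hypothetical flush: walk the chain, clearing every cached address so
 * the next call through each stub takes the full dispatch path again. */
static int flush_stub_chain(FunctionStubRec *head)
{
    int cleared = 0;
    for (FunctionStubRec *p = head; p != NULL; p = p->fNextStubRec) {
        if (p->fFunction) { p->fFunction = NULL; cleared++; }
    }
    return cleared;
}
```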
______________________________________

This is the actual function dispatcher that is linked with the Shared Library Manager for reference:

______________________________________
_SLMFuncDispatch PROC Export
        move.l d0,-(sp)      ; Push the library manager
        move.l a0,-(sp)      ; Push the stub record
        move.l d0,a0         ; Put TLibraryManager into a0
        GetClassCatalog a0   ; Get the class catalog into a0
        move.l a0,-(sp)      ; Push it
        move.l (a0),a0       ; Get the VTable
        move.l CatVTable.LookupFunction(a0),a0  ; Get the function
        jsr    (a0)          ; Call it
        lea    12(sp),sp     ; Pop the parameters
        move.l d0,a0         ; Get the function address
        jmp    (a0)          ; jump to it
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

The CatVTable.LookupFunction is a call to the TClassCatalog::LookupFunction method. A ClientVTableRec is generated per function set (to be linked with the client in SharedLibTemp.stubs.a):

______________________________________
_CVRExampleFSet RECORD EXPORT
        DC.L  0, 0, 0
        DC.W  $0100
        DC.W  $0000
        DC.B  'appl:exam$ExampleFSet', 0
        ENDR
______________________________________
Copyright Apple Computer 1991-1993

Function Set Function Dispatching (Initialization)

The initialization code is generated in two files: SharedLibTemp.init.a and SharedLibTemp.init.c. Most of the generated code is in "C", but for each class or function set, a ClientVTableRec is generated which is put into the assembly file. It is possible to put the ClientVTableRec definition in "C" (the inline "C"-string makes it a little more challenging, but it can be done), but we chose to leave it in assembly. Strictly speaking, if your initialization code is going to link with your stub library, a ClientVTableRec does not need to be generated for initialization, but it makes things easier just to generate one as a matter of course.
The vector table is generated into SharedLibTemp.Init.c (to be linked with the library):

______________________________________
typedef void (*ProcPtr)(void);

ProcPtr _vtbl_ExampleFSet[ ] =
{
    0,    // First entry is normally 0

The SLM allows functions to be exported by name, as well. If any functions are exported by name, one more structure is created. Assuming that the HelloC routine and the GOODBYEPASCAL routine were to be exported by name, the following extra code would be generated:

______________________________________
static char _Str_Sample_1[ ] = "HelloC";
static char _Str_Sample_2[ ] = "GOODBYEPASCAL";

char* SampleNameStruc[ ] =
{
    (char*)-1L,
    (char*)-1L,
    _Str_Sample_1,
    (char*)-1L,
    _Str_Sample_2,
    0,
};
______________________________________
Copyright Apple Computer 1991-1993

Any slot corresponding to a function that is not exported by name is filled with a (char*)-1L. A slot with a NULL (0) pointer terminates the list of names. A pointer to this list of names is stored in the first entry of the vector table:

______________________________________
ProcPtr _vtbl_ExampleFSet[ ] =
{
    (ProcPtr)SampleNameStruc,

An initialization function is generated for each function set.
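A name lookup over a table shaped like SampleNameStruc can be sketched in C: skip every (char*)-1L placeholder, stop at the NULL terminator, and return the slot index of a matching name. The function name is illustrative; the SLM's actual lookup routine is not shown in the text.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Look up a function's slot index by name in a SampleNameStruc-style
 * table: (char*)-1L marks a slot exported only by index, and a NULL
 * slot terminates the list. Returns -1 if the name is not found. */
static int lookup_by_name(char *table[], const char *name)
{
    for (int i = 0; table[i] != NULL; i++) {
        if (table[i] == (char *)-1L)
            continue;                  /* not exported by name */
        if (strcmp(table[i], name) == 0)
            return i;                  /* slot of the function */
    }
    return -1;
}
```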
First, let's show you the header generated for the initialization file (generated in SharedLibTemp.init.c):

______________________________________
typedef struct ClientVTableRec
{
    long l1;
    long l2;
    long l3;
    short v1;
    short v2;
    char name[ ];
} ClientVTableRec;

void Fail(long err, const char* msg);

extern void** _gLibraryManager;  // A convenient lie?
extern void _pure_virtual_called(void);

typedef void (*ProcPtr)(void);
typedef struct _mptr { short o; short i; ProcPtr func; } _mptr;
typedef ProcPtr* VTable;
typedef void (*SetupProc)(void*, unsigned int);
typedef void* (*Init1VTableRec)(void*, void*, SetupProc,
            ClientVTableRec*);
typedef void* (*InitVTableRec)(void*, void*, VTable, VTable,
            ClientVTableRec*, unsigned int, char*);
typedef void* (*GetVTableRec)(void*, ClientVTableRec*, unsigned char);
typedef void* (*ReleaseVTableRec)(void*, ClientVTableRec*);
typedef void* (*InitExportSet)(void*, void*, VTable, char*);
typedef void* (*GetVTableMemory)(void*, unsigned int);

#define GetClassCatalog (_gLibraryManager[5])
______________________________________
Copyright Apple Computer 1991-1993

Now the SVR (SVR stands for Setup VTableRec) function (generated in SharedLibTemp.init.c):

______________________________________
#pragma segment A5Init
void _SVRExampleFSet(void* vtRec, unsigned int vtRecSize)
{
    register ProcPtr toCall;
    void** catalog = GetClassCatalog;  // Get Class Catalog
    void** ctVTable = *catalog;        // Get its VTable
    //
    // Get pointer to 22nd entry, which is the InitExportSet
    // function
    //
    toCall = (ProcPtr)ctVTable[22];
    //
    // Call it with the following parameters:
    // 1) The pointer to the class catalog
    // 2) The vtable record pointer passed in to your function
    // 3) A pointer to the vtable we created for the function set.
    // 4) A zero for future expansion
    //
    (*(InitExportSet)toCall)(catalog, vtRec, _vtbl_ExampleFSet, 0);
}
______________________________________
Copyright Apple Computer 1991-1993

The first parameter to the SVR function is known as
the VTableRec. This is the definition of the VTableRec structure. This information is for reference only.

______________________________________
struct VTableRec
{
    VTable            fVTablePtr;
    VTable            fExportTable;
    TUseCount         fUseCount;      // "long" in size
    void*             fReserved;
    ClientVTableRec*  fParentClientPtr;
    unsigned short    fSize;
    unsigned          fParentIsVirtual:1;
    unsigned          fFiller:1;
    unsigned          fCanNewObject:1;
    unsigned          fIsFunctionSet:1;
    unsigned          fIsRootClass:1;
    unsigned          fLocalCVRIsParent:1;
    unsigned          fIsParentVTable:1;
    unsigned          fIsMasterOrLast:1;
    unsigned char     fDispatcher;
    ClientVTableRec*  fLocalClientPtr;
    union
    {
        SetupProc           fSetup;
        TRegisteredObjects* fRegisteredObjects;
    };
};
______________________________________
Copyright Apple Computer 1991-1993

The VTableRec is used internally by the SLM to keep track of information about each function set or class. Library initialization code supplies all of the information the SLM needs to fill in the VTableRec. One initialization function per library is generated which calls the SLM to register all of the ClientVTableRecs and SVR functions. This example shows initialization for the example library, which has the function set ExampleFSet and the class TExampleClass. Those parts of this code that you would change depending on what you are initializing are underlined.

______________________________________
#pragma segment A5Init
void* _InitVTableRecords(void)
{
    register ProcPtr toCall;
    void* vtRec;
    void* savedRec;
    void** catalog = GetClassCatalog;  // Get Class Catalog
    void** ctVTable = *catalog;        // Get its VTable
    //
    // Get the pointer to GetVTableMemory, and call it, asking
    // for memory for 2 VTableRecs (Of course, you would ask
    // for the number of VTableRecs that you need).
    //
    toCall = (ProcPtr)ctVTable[19];
    savedRec = (*(GetVTableMemory)toCall)(catalog, 2);
    //
    // Get the pointer to the Init1VTableRec method
    //
    toCall = (ProcPtr)ctVTable[20];
    //
    // Start `vtRec` pointing to the memory that we got.
    //
    vtRec = savedRec;
    //
    // Call Init1VTableRec for our first export (a Class, in this
    // case). Parameters are:
    // 1) The class catalog
    // 2) The VTableRec memory
    // 3) A pointer to the "SetupVTableRec" procedure
    // 4) A pointer to the local "ClientVTableRec"
    // The method will return a pointer to the next
    // VTableRec for use with additional initialization.
    // Using this method, it is not required for you to "know"
    // the size of a VTableRec.
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec,
                _SVRTExampleClass, &_CVRTExampleClass);
    //
    // Do it for the next export (a function set, in this example)
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec,
                _SVRExampleFSet, &_CVRExampleFSet);
    //
    // Return a pointer to the original VTableRec
    //
    return savedRec;
}
______________________________________
Copyright Apple Computer 1991-1993

Singly-Inherited Classes (Stubs)

This section will examine the stub creation for a C++ class which is singly-inherited. This includes both forms of vtables--SingleObject derived, and standard AT&T v2.1 classes. Classes which are multiply-inherited (derive from more than 1 parent class) are handled in a separate section.
Here is the definition of the TExampleClass (from ExampleClass.h):

______________________________________
#define kTExampleClassID "appl:exam$TExampleClass,1.1"

class TExampleClass : public TDynamic
{
public:
    TExampleClass( );
    virtual ~TExampleClass( );

    // New Methods
    virtual char* GetObjectName( ) const;
    virtual void SetObjectName(char *theName);
    virtual void DoThisAndThat( );
    virtual void DoThat( );
    virtual void SetGlobalInt(long theValue);
    virtual long GetGlobalInt( );

    // Public non-virtual function
    // Dynamically exported by using ExportFunction
    void GetGlobalRef(long*&);

    // Public static function
    // Dynamically exported by the ExportFunction
    static Boolean Test(ulong test);
    static Boolean Test(char* test);

private:
    char *fName;
    // static gExampleClassCount counts the number of instances
    static long gExampleClassCount;
};
______________________________________
Copyright Apple Computer 1991-1993

Constructor Stubs

Constructor stubs are generated for each constructor of a class. If the export file specifies that no constructor stubs are to be exported (using the noMethodStubs or noStubs flag), then the constructor stubs must be created in the assembly language portion (SharedLibTemp.Init.a) of the initialization file. This is so that use counts can be properly maintained for every class. A stub record is generated to be linked with both the client and the library (in SharedLibTemp.stubs.a or SharedLibTemp.init.a, as appropriate). This record is used much like the record for a function set--it contains cached information that is used to speed up the dispatching. In addition, it contains the index into the "structor table" for the actual address of the constructor (see the class initialization section for information on creating this table).
______________________________________
_stb__ct__13TExampleClassFv RECORD EXPORT
        IMPORT _CVRTExampleClass:Data
        DC.L  0
        DC.L  _CVRTExampleClass      ; ClientVTableRec
        DC.L  0
        DC.W  3                      ; Index into table
        ENDR
______________________________________
Copyright Apple Computer 1991-1993

A constructor stub is generated to be linked with both the client and the library (in SharedLibTemp.stubs.a or SharedLibTemp.init.a, as appropriate):

______________________________________
__ct__13TExampleClassFv PROC EXPORT
        IMPORT _stb__ct__13TExampleClassFv:Data
        IF MODEL = 0
        LEA    _stb__ct__13TExampleClassFv,a0
        ELSE
        LEA    (_stb__ct__13TExampleClassFv).L,a0
        ENDIF
        IMPORT _SLM11ConstructorDispatch
        IF MODEL = 0
        JMP    _SLM11ConstructorDispatch
        ELSE
        JMP    (_SLM11ConstructorDispatch).L
        ENDIF
        IF MACSBUG = 1 THEN
        rts
        DC.B   $80, $1C
        DC.B   'stub__ct__13TExampleClassFv'
        DC.B   0
        DC.W   0
        ENDIF
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

Notice that the constructor does not use any cached information to jump to the constructor directly. This is so that use counts may be updated (see the next bullet point). This is the constructor function dispatcher that is linked with the client. This is for reference only--the client should link with the version supplied by SLM.
__________________________________________________________________________
_SLM11ConstructorDispatch PROC Export
        WITH   FunctionStubRec
        tst.l  (a0)                  ; Constructor pointer
        addq.l #1,VTRec.useCount(a1) ; Increment the use count
        beq.s  @1                    ; Have to do the library count?
        move.l (a0),a0               ; No - go on
        jmp    (a0)                  ; Jump to the constructor
; Here, we have to bump the library's use count - so we
; call the SLMConstructorDispatch function - but first, we
; have to back off the VTableRec use count we just incremented
;
@1      subq.l #1,VTRec.useCount(a1) ; Back off the VTableRec use count
@2
        move.l $2B6,a1
        move.l $10C(a1),a1
        move.l SLMGlobal.fStubHelp + 4(a1),a1
        IF MODEL = 0 THEN
        move.l _gLibraryManager,d0
        ELSE
        move.l (_gLibraryManager).L,d0
        ENDIF
        jmp    (a1)
        ENDP
__________________________________________________________________________
Copyright Apple Computer 1991-1993

Under normal circumstances, bumping the use count of the VTableRec is all that is required, so dispatching is moderately fast. The only time that the more general dispatching mechanism needs to be invoked is if the constructor has never been invoked by the current client before, and there are no outstanding instances of this class among all clients of the SLM. Of course, if your compiler has a different naming convention for constructors, you must generate stubs that have the same name as the ones generated (See the initialization section for more information). This is the actual constructor function dispatcher that is linked with the Shared Library Manager. This is for reference only:

______________________________________
SLMConstructorDispatch PROC Export
        move.l d0,a1         ; Save the library manager in a1
        moveq  #1,d0
; We test the "this" pointer to see if it's NULL. If it is, then
; the object is being "newed", so we set d0 to 0, to tell the SLM
; that we are fetching a primary object, and not a subclass. This
; allows the SLM a little more latitude in version matching.
;
        tst.l  4(sp)         ; Check the "this" pointer
        bne.s  @1
        moveq  #0,d0
@1      move.l d0,-(sp)      ; Push the "isSubClass" flag
        move.l a1,-(sp)      ; Push the library manager
        move.l a0,-(sp)      ; Push the stub record
        move.l 20(a1),a0     ; Get class catalog from lib mgr
        move.l a0,-(sp)      ; Push it
        move.l (a0),a0       ; Get its vtable
        move.l CatVTable.GetConstructor(a0),a0
        jsr    (a0)          ; Call the GetConstructor entry
        lea    16(sp),sp     ; Drop parameters
        move.l d0,a0         ; Fetch constructor address
        jmp    (a0)          ; Jump to it
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

The CatVTable.GetConstructor is a call to the TClassCatalog::GetConstructor method.

Destructor Stubs

Destructor stubs are generated for the destructor of an exported class. If the export file specifies that the destructor stub is not to be exported (using the noMethodStubs or noStubs flag), then the destructor stub must be created in the assembly language portion (SharedLibTemp.Init.a) of the initialization file. This is so that use counts can be properly maintained for every class. The destructor stub is generated to be linked with both client and library (in SharedLibTemp.stubs.a or SharedLibTemp.init.a, as appropriate). Note that the stub record for a destructor is slightly smaller than one for a constructor or a normal function. This is because we do not need to store the index to the destructor in that table--it is always two (see the class initialization section for more information).
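The use-count fast path for constructors can be modeled in C: normally the dispatcher only bumps the per-class use count, and the general (slow) dispatcher runs only when the count indicates there were no outstanding instances, so the library itself may need loading. The structure and starting count here are simplifications for illustration.

```c
#include <assert.h>

/* Simplified model of the VTableRec use count kept per class. */
typedef struct {
    long use_count;       /* outstanding instances across all clients */
    int  library_loaded;
} VTableRecModel;

static int general_dispatches = 0;   /* trips through the slow path */

static void construct(VTableRecModel *vt)
{
    if (vt->use_count == 0) {        /* first live instance anywhere:   */
        general_dispatches++;        /* general path - make sure the    */
        vt->library_loaded = 1;      /* library's own count is bumped   */
    }
    vt->use_count++;                 /* normal case: just bump the count */
}

static void destruct(VTableRecModel *vt)
{
    vt->use_count--;                 /* mirror: decrement on destruction */
}
```

Note that in the actual listing the count is incremented first and the branch tests the result, so the real bookkeeping differs in detail; this model only captures the "fast path almost always" behavior the text describes.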
______________________________________
_stb__dt__13TExampleClassFv RECORD EXPORT
        IMPORT _CVRTExampleClass:Data
        DC.L  0
        DC.L  _CVRTExampleClass      ; Client VTable Rec
        DC.L  0
        ENDR

__dt__13TExampleClassFv PROC EXPORT
        IMPORT _stb__dt__13TExampleClassFv:Data
        IF MODEL = 0
        LEA    _stb__dt__13TExampleClassFv,a0
        ELSE
        LEA    (_stb__dt__13TExampleClassFv).L,a0
        ENDIF
        IMPORT _SLM11DestructorDispatch
        IF MODEL = 0
        JMP    _SLM11DestructorDispatch
        ELSE
        JMP    (_SLM11DestructorDispatch).L
        ENDIF
        IF MACSBUG = 1 THEN
        rts
        DC.B   $80, $1C
        DC.B   'stub__dt__13TExampleClassFv'
        DC.B   0
        DC.W   0
        ENDIF
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

Notice that the destructor does not use any cached information to jump to the destructor directly. This is so that use counts may be updated (see the next bullet point). This is the destructor function dispatcher that is linked with the client. This is for reference only--the client should link with the version supplied by SLM. The DestStub record looks just like a function stub record, except that it is one short shorter (it does not need the funcID field, it is known to be two (2)).
__________________________________________________________________________
DestStub  RECORD 0
function  DS.L 1
clientCVR DS.L 1
nextPtr   DS.L 1
          ENDR

_SLM11DestructorDispatch PROC Export
        WITH   DestStub
        tst.l  (a0)                  ; Destructor pointer
        subq.l #1,VTRec.useCount(a1) ; Decrement the use count
        bmi.s  @1                    ; Have to do the library count?
        move.l (a0),a0               ; No - go on
        jmp    (a0)                  ; Jump to the destructor
; Here, we have to drop the library's use count - so we
; call the SLMDestructorDispatch function - but first, we
; have to back off the VTableRec use count we just decremented
;
@1      addq.l #1,VTRec.useCount(a1) ; Back off the VTableRec use count
@2      move.l $2B6,a1
        move.l $10C(a1),a1
        move.l SLMGlobal.fStubHelp(a1),a1
        IF MODEL = 0 THEN
        move.l _gLibraryManager,d0
        ELSE
        move.l (_gLibraryManager).L,d0
        ENDIF
        jmp    (a1)
        ENDP
__________________________________________________________________________
Copyright Apple Computer 1991-1993

Under normal circumstances, the decrementing of the use count of the VTableRec is all that is required, so dispatching is moderately fast. The only time that the more general dispatching mechanism needs to be invoked is if the destructor has never been invoked by the current client before, and we are destroying the last outstanding instance of this class among all clients of the SLM. This is the actual destructor function dispatcher that is linked with the Shared Library Manager. This function is the most complex of the dispatching functions. This is because calling the destructor of an object can cause a library to unload. Unloading only occurs when SystemTask is called. However, if the destructor were to call SystemTask, this might be highly embarrassing, so we go to some lengths to insure that the library cannot be unloaded until we return from the destructor.
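The deferred-unload guard just described can be modeled in C: a nesting counter (playing the role of fNoSystemTask) is held while the destructor runs, so even a SystemTask call from inside the destructor cannot unload the library until the dispatcher releases the guard. Names and structure here are illustrative.

```c
#include <assert.h>

/* Simplified stand-in for the SLM globals. */
typedef struct {
    int no_system_task;   /* fNoSystemTask-style nesting counter */
    int unloaded;         /* has the library been unloaded yet?  */
} SLMGlobalsModel;

static int unload_blocked_during_dtor = -1;

/* SystemTask may unload only when the guard count is zero. */
static void system_task(SLMGlobalsModel *g)
{
    if (g->no_system_task == 0)
        g->unloaded = 1;
}

/* A destructor that (unwisely) calls SystemTask while running. */
static void example_destructor(SLMGlobalsModel *g)
{
    system_task(g);
    unload_blocked_during_dtor = !g->unloaded;  /* record what happened */
}

static void destructor_dispatch(SLMGlobalsModel *g)
{
    g->no_system_task++;        /* keep SystemTask from unloading */
    example_destructor(g);
    g->no_system_task--;        /* let SystemTask unload now      */
}
```

The real dispatcher achieves the "return to ourselves" part by shuffling the stack so the destructor returns into the dispatcher; the C model gets the same effect simply by calling the destructor between the increment and decrement.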
This is for reference only:

______________________________________
SLMDestructorDispatch PROC Export
        WITH   SLMGlobal
        move.l d0,-(sp)      ; Push the library manager
        move.l a0,-(sp)      ; Push the stub record
        move.l d0,a0         ; Put TLibraryManager into a0
        move.l 20(a0),a0     ; Get the class catalog
        move.l a0,-(sp)      ; Push it
        move.l (a0),a0       ; Get its vtable
        move.l CatVTable.GetDestructor(a0),a0
        jsr    (a0)          ; Call class catalog
        lea    12(sp),sp     ; Drop parameters
; Here, we're possibly being unloaded at the next
; SystemTask. Just in case the destructor calls SystemTask,
; we disable code unloading until the destructor is finished
; running. We shift the stack down by 4 bytes so that we can
; save a return address, and get returned to ourselves by
; the destructor
;
        GetSLMGlobal
        addq.l #1,fNoSystemTask(a0)  ; Keep system task from unloading
        move.l (sp),d1       ; Save return address
        move.l 4(sp),(sp)    ; Shift Parm 1
        move.l 8(sp),4(sp)   ; Shift Parm 2
        move.l d1,8(sp)      ; Save original return address
        move.l d0,a0
        jsr    (a0)          ; Call the destructor
        GetSLMGlobal
        subq.l #1,fNoSystemTask(a0)  ; Let system task unload now
        move.l 8(sp),a0      ; Recover return address
        move.l 4(sp),8(sp)   ; Put parms back where they belong
        move.l (sp),4(sp)
        addq.l #4,sp         ; Drop return address slot
        jmp    (a0)          ; And return to caller
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

The CatVTable.GetDestructor is a call to the TClassCatalog::GetDestructor method.

Virtual Method Stubs

A virtual method stub is generated for each virtual function which may be linked with the client (in SharedLibTemp.stubs.a), unless the client specified NoStubs or NoVirtualStubs in the export declaration for the class. These stubs are only used by a client that creates stack objects since the compiler generates a direct call when the dot syntax is used. The compiler is "smart" and "knows" the class of the object so it does not generate a v-table indirect call. It would be better for SLM if the compiler had an option to generate v-table calls anyway.
__________________________________________________________________________
_stbGetObjectName_13TExampleClassCFv RECORD EXPORT
        IMPORT _CVRTExampleClass:Data
        DC.L 0
        DC.L _CVRTExampleClass
        DC.L 0
        DC.W 9                          ; Index in VTable
        ENDR

GetObjectName_13TExampleClassCFv PROC EXPORT
        IMPORT _stbGetObjectName_13TExampleClassCFv:Data
        IF MODEL = 0
        LEA _stbGetObjectName_13TExampleClassCFv,a0
        ELSE
        LEA (_stbGetObjectName_13TExampleClassCFv).L,a0
        ENDIF
        move.l (a0),d0                  ; Is function address cached?
        beq.s @1                        ; No - do it the hard way
        move.l d0,a0                    ; Yes - get address into a0
        jmp (a0)                        ; and jump
        IF MODEL = 0
        LEA _CVRTExampleClass,a0
        ELSE
        LEA (_CVRTExampleClass).L,a0
        ENDIF
        IMPORT _SLM11VTableDispatch
@1      IF MODEL = 0
        JMP _SLM11VTableDispatch
        ELSE
        JMP (_SLM11VTableDispatch).L
        ENDIF
        IF MACSBUG = 1 THEN
        rts
        DC.B $80, $26
        DC.B 'stub_GetObjectName_13TExampleClassCFv'
        DC.B 0
        DC.W 0
        ENDIF
        ENDP
__________________________________________________________________________
Copyright Apple Computer 1991-1993

Notice that for virtual method stubs, we can once again cache the actual address of the function and call it directly. This is the virtual function dispatcher that is linked with the client.

______________________________________
_SLM11VTableDispatch PROC Export
        move.l $2B6,a1
        move.l $10C(a1),a1
        move.l SLMGlobal.fStubHelp + 8(a1),a1
        IF MODEL = 0 THEN
        move.l _gLibraryManager,d0
        ELSE
        move.l (_gLibraryManager).L,d0
        ENDIF
        jmp (a1)
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

This is the actual virtual function dispatcher that is linked with the Shared Library Manager.
______________________________________
SLMVTableDispatch PROC Export
        ...
        move.l CatVTable.GetVTableEntry(a0),a0
        jsr (a0)                ; Call the GetVTableEntry entry
        lea 12(sp),sp           ; Drop parameters
        move.l d0,a0            ; Fetch function address
        jmp (a0)                ; Jump to it
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

Non-Virtual Method Stubs

Many classes use both virtual and non-virtual functions. Virtual functions allow subclasses to override the behavior of a class, while non-virtual functions do not. The SLM supports stubs for non-virtual methods as well (generated automatically unless noStubs or noMethodStubs is specified in the export declaration for the class). Note that the SLM does not export static methods automatically in this manner. Static methods should normally be exported in a function set, since they do not require an instance of the class to be in existence in order for calling the method to be valid. A non-virtual member function stub is generated for each non-virtual member function (to be linked with the client in SharedLibTemp.stubs.a).

__________________________________________________________________________
_stbGetGlobalRef_13TExampleClassFRPI RECORD EXPORT
        IMPORT _CVRTExampleClass:Data
        DC.L 0
        DC.L _CVRTExampleClass
        DC.L 0
        DC.W 0                          ; Index into _exptbl array
        ENDR

GetGlobalRef_13TExampleClassFRPI PROC EXPORT
        IMPORT _stbGetGlobalRef_13TExampleClassFRPI:Data
        IF MODEL = 0
        LEA _stbGetGlobalRef_13TExampleClassFRPI,a0
        ELSE
        LEA (_stbGetGlobalRef_13TExampleClassFRPI).L,a0
        ENDIF
        move.l (a0),d0                  ; Is address cached?
        beq.s @1                        ; No - do it the hard way
        move.l d0,a0                    ; Yes - get address into a0
        jmp (a0)                        ; and jump
        IMPORT _SLM11ExtblDispatch
@1      IF MODEL = 0
        JMP _SLM11ExtblDispatch
        ELSE
        JMP (_SLM11ExtblDispatch).L
        ENDIF
        IF MACSBUG = 1 THEN
        rts
        DC.B $80, $26
        DC.B 'stub_GetGlobalRef_13TExampleClassFRPI'
        DC.B 0
        DC.W 0
        ENDIF
        ENDP
__________________________________________________________________________
Copyright Apple Computer 1991-1993

This is the non-virtual function dispatcher that is linked with the client.

______________________________________
_SLM11ExtblDispatch PROC Export
        move.l $2B6,a1
        move.l $10C(a1),a1
        move.l SLMGlobal.fStubHelp + 12(a1),a1
        IF MODEL = 0 THEN
        move.l _gLibraryManager,d0
        ELSE
        move.l (_gLibraryManager).L,d0
        ENDIF
        jmp (a1)
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

This is the actual non-virtual function dispatcher that is linked with the Shared Library Manager.

______________________________________
SLMExtblDispatch PROC Export
        ...
        move.l CatVTable.GetExportEntry(a0),a0
        jsr (a0)                ; Call the GetExportEntry entry
        lea 12(sp),sp           ; Drop parameters
        move.l d0,a0            ; Fetch function address
        jmp (a0)                ; Jump to it
        ENDP
______________________________________
Copyright Apple Computer 1991-1993

Static Method Stubs

Static methods should normally be exported by function sets, since they are not associated with an instance of an object. You can export a specific static method by specifying "exports=<ClassName>::<StaticMethod>" in your function set declaration, or you can export all static methods of a class by specifying "exports=static<ClassName>" in the declaration.

Singly-Inherited Classes (Initialization)

This section examines the initialization code for a C++ class which is singly-inherited. This includes both forms of vtables: SingleObject-derived, and standard AT&T v2.1 classes. Classes which are multiply-inherited (deriving from more than one parent class) are handled in a separate section.

ClientVTableRecord for TExampleClass:

______________________________________
_CVRTExampleClass RECORD EXPORT
        DC.L 0, 0, 0
        DC.W $0110
        DC.W $0110
        DC.B 'appl:exam$TExampleClass', 0
        ENDR
______________________________________
Copyright Apple Computer 1991-1993

Of course the vtable for TExampleClass is generated by the compiler.
The symbol for it is imported (SharedLibTemp.init.c):

extern ProcPtr _vtbl_13TExampleClass[];

The symbols for the non-virtual member functions (in this case GetGlobalRef) and the constructor and destructor are imported. The names of the constructor and destructor have been mangled with a "q" to distinguish them from the stubs. The SLM modifies the object file that contains the constructor and destructor so that their names are different. This forces any reference to the constructor or destructor from within the library to also go through the stub, so that use counts can be maintained. If a library were to create an object, give it to the client, and have the client destroy the object, it would be possible for the library to unload even though there were outstanding references to it, since construction did not increment the use count but destruction did decrement it. The stub for the constructor (the one without the "q") is called by NewObject. The real constructors and destructors (the ones with the q mangling) are in the export table so they can be called from the stubs. The original symbols for the constructors and destructors in the object file (.o file) for the library are "q mangled" by the LibraryBuilder tool.

______________________________________
extern void _ct_13TExampleClassFv(void);
extern void _dtq_13TExampleClassFv(void);
extern void _ctq_13TExampleClassFv(void);
extern void GetGlobalRef_13TExampleClassFRPI(void);
______________________________________
Copyright Apple Computer 1991-1993

Two export tables are generated: one for the constructors and destructors (_extbl_<ClassName>) and one for non-virtual functions (_exptbl_<ClassName>).
______________________________________
ProcPtr _exptbl_TExampleClass[ ] =
{
        (ProcPtr)GetGlobalRef_13TExampleClassFRPI
};

ProcPtr _extbl_TExampleClass[ ] =
{
        (ProcPtr)_exptbl_TExampleClass,
        (ProcPtr)_ct_13TExampleClassFv,
        (ProcPtr)_dtq_13TExampleClassFv,
        (ProcPtr)_ctq_13TExampleClassFv
};
______________________________________
Copyright Apple Computer 1991-1993

Note that the first entry in the "extbl" points to the "exptbl". This allows a later version of the library to have new non-virtual method exports and new constructors. The first 4 entries of the _extbl_ are always fixed. They may be zero if the corresponding table or function does not exist. The second entry always points to the stub for the default constructor (the constructor with no arguments). The third entry always points to the "real" destructor, and the fourth entry always points to the "real" default constructor. Any entries after the fourth are for other versions of the constructors, and they always contain pointers to the "real" version (_ctq_) of those constructors. The actual initialization code then needs to be generated.
______________________________________
#pragma segment A5Init
void _SVRTExampleClass(void* vtRec, unsigned int vtRecSize)
{
    register ProcPtr toCall;
    void** cat = GetClassCatalog;
    void** ctVTable = *cat;
    void* parentVtRec;

    //
    // Initialize the VTableRec for TExampleClass
    //
    {
        extern ProcPtr _vtbl_13TExampleClass[ ];
        extern ClientVTableRec _CVRTDynamic;
        toCall = (ProcPtr)ctVTable[21];
        //
        // Call InitVTableRec.
        //
        vtRec = (*(InitVTableRec)toCall)(cat, vtRec, _vtbl_13TExampleClass,
            _extbl_TExampleClass, &_CVRTDynamic, 8, 0);
    }

    //
    // If there is a shared parent, get its VTableRec
    //
    toCall = (ProcPtr)ctVTable[25];
    parentVtRec = (*(GetVTableRec)toCall)(cat, &_CVRTDynamic, 1);

    /************************
    * This code is OPTIONAL
    ************************/
    //
    // If there are functions inherited from the parent, let's
    // copy them. For a standard AT&T V2.1 vtable format, the
    // copy code would be modified appropriately.
    //
    {
        unsigned int idx;
        register VTable vtbl = *(VTable*)parentVtRec;
        register VTable myVTable = _vtbl_13TExampleClass;
        myVTable += 2;
        vtbl += 2;
        for (idx = 0; idx < 7; ++idx)
        {
            *myVTable++ = *vtbl++;
        }
    }
}
______________________________________
Copyright Apple Computer 1991-1993

This code does several things. The first is to initialize the VTableRec. The parameters to the InitVTableRec call are as follows:

1) A pointer to the class catalog, obtained from the GetClassCatalog macro.

2) A pointer to the VTableRec that was originally passed in to your initialization function.

3) A pointer to the vtable for the class.

4) A pointer to the export table for the class.

5) A pointer to the ClientVTableRec of the first "shared" parent class, looking backwards up the hierarchy. Normally, it is the ClientVTableRec of your parent class. However, if the parent class is not a shared class, then look at its parent, and so on until you find a parent class that is shared. Leave this parameter NULL (zero) if no parent class is shared.
6) The size of the object, in bytes. This parameter is optional, and may be set to zero unless you want your class to be able to be instantiated with the NewObject function. If this is the case, you must supply the size as a negative number whose absolute value is the size of the object in bytes.

7) The last parameter is a flag to tell the SLM about the VTableRec. For singly-inherited classes, only two values are used. NULL (or 0) indicates that the class inherits from SingleObject or HandleObject and has a vtable format that is just an array of function pointers. A value of (char*)-1L indicates that the vtable has the format of an AT&T V2.1 vtable, which looks like:

______________________________________
typedef struct _mptr
{
    short o;
    short i;
    ProcPtr func;
} _mptr;
______________________________________
Copyright Apple Computer 1991-1993

Multiply-Inherited Classes (Stubs)

This section examines the stub creation for a C++ class which is multiply-inherited. What follows is a set of declarations for several classes, culminating in the definition of the class TMixedClass2. These are the classes that will be used to demonstrate the generation of code for multiply-inherited classes.
______________________________________
#define kMMixin1ID "quin:test$MMixin1,1.1"
class MMixin1 : public MDynamic
{
protected:
    MMixin1(int a);
    MMixin1( );
    virtual ~MMixin1( );
public:
    virtual int Add1(int a);
    virtual int Sub1(int a);
    int fFieldm;
};

#define kMMixin2ID "quin:test$MMixin2,1.1"
class MMixin2 : public MDynamic
{
protected:
    MMixin2( );
    MMixin2(int a);
    virtual ~MMixin2( );
public:
    virtual int Add2(int a);
    virtual int Sub2(int a);
    int fFieldm;
};

#define kMMixin3ID "quin:test$MMixin3,1.1"
class MMixin3 : public MDynamic
{
protected:
    MMixin3(int a);
    MMixin3( );
    virtual ~MMixin3( );
public:
    virtual int Add3(int a);
    virtual int Sub3(int a);
    int fFieldm;
};

//
// Non-shared class
//
class TMainClass : public TStdDynamic
{
public:
    TMainClass(int a);
    virtual ~TMainClass( );
    virtual int Mul(int a);
    virtual int Div(int a);
    int fFieldt;
};

#define kTMixedClassID "quin:test$TMixedClass,1.1"
class TMixedClass : public TMainClass, virtual public MMixin1, public MMixin2
{
public:
    TMixedClass(int a);
    TMixedClass( );
    virtual ~TMixedClass( );
    virtual int Sub1(int a);
    virtual int Div(int a);
    virtual int Add2(int a);
};

#define kTMixedClass2ID "quin:test$TMixedClass2,1.1"
class TMixedClass2 : public TMixedClass, virtual public MMixin1, public MMixin3
{
public:
    TMixedClass2(int a);
    TMixedClass2( );
    virtual ~TMixedClass2( );
    virtual int Sub2(int a);
    virtual int Mul(int a);
    virtual int Add1(int a);
    virtual int Add3(int a);
};
______________________________________
Copyright Apple Computer 1991-1993

All stubs for multiply-inherited classes are generated in exactly the same way as for singly-inherited classes. This works because all inherited virtual functions are present in the primary vtable of the class. If you encounter a case where this is not true, the SLM provides a way to deal with it: set the high bit (0x8000) of the index value (for whichever vtable the virtual function is present in), and add a second short (two bytes) immediately after it in the stub record, which is the index number of the VTableRec (0 = primary, 1 = next one, etc.).
Multiply-Inherited Classes (Initialization)

This section examines the initialization code for a C++ class which is multiply-inherited. The situation here is much different than for singly-inherited classes. Consider, using the hierarchy above: if one has an MMixin1 object in hand, is it a true MMixin1 object, or is it a TMixedClass2 object cast to an MMixin1, or even a TMixedClass object cast to an MMixin1? Information needs to be made available about the offset back to the "main" object. This allows the SLM function CastObject to work properly with objects which are instances of multiply-inherited classes. This information is stored in the MMixin1 VTableRec, which is stored in the MMixin1 vtable. However, a different VTableRec is needed for an MMixin1 object which is "stand-alone" than for an MMixin1 object that is part of a TMixedClass2 object, because the offset to the "main" object is zero for a "stand-alone" object, and probably non-zero for one of many parents of a "main" object. This leads us to the conclusion that in order to initialize a multiply-inherited class, multiple VTableRecs need to be initialized to keep things running smoothly.
The initialization for classes MMixin1, MMixin2, and MMixin3 is exactly what you would expect for singly-inherited classes:

______________________________________
extern ClientVTableRec _CVRMMixin1;
extern void _ct_7MMixin1Fv(void);
extern void _dtq_7MMixin1Fv(void);
extern void _ctq_7MMixin1Fv(void);
extern void _ctq_7MMixin1Fi(void);

ProcPtr _extbl_MMixin1[ ] =
{
    0,
    (ProcPtr)_ct_7MMixin1Fv,
    (ProcPtr)_dtq_7MMixin1Fv,
    (ProcPtr)_ctq_7MMixin1Fv,
    (ProcPtr)_ctq_7MMixin1Fi
};

#pragma segment A5Init
void _SVRMMixin1(void* vtRec, unsigned int vtRecSize)
{
    register ProcPtr toCall;
    void** catalog = GetClassCatalog;
    void** ctVTable = *catalog;
    void* parentVtRec;
    extern ProcPtr _vtbl_7MMixin1[ ];

    toCall = (ProcPtr)ctVTable[21];
    vtRec = (*(InitVTableRec)toCall)(catalog, vtRec,
        _vtbl_7MMixin1, _extbl_MMixin1, 0, 8, (char*)0x0001);
}

extern ClientVTableRec _CVRMMixin2;
extern void _ct_7MMixin2Fv(void);
extern void _dtq_7MMixin2Fv(void);
extern void _ctq_7MMixin2Fv(void);
extern void _ctq_7MMixin2Fi(void);

ProcPtr _extbl_MMixin2[ ] =
{
    0,
    (ProcPtr)_ct_7MMixin2Fv,
    (ProcPtr)_dtq_7MMixin2Fv,
    (ProcPtr)_ctq_7MMixin2Fv,
    (ProcPtr)_ctq_7MMixin2Fi
};

#pragma segment A5Init
void _SVRMMixin2(void* vtRec, unsigned int vtRecSize)
{
    register ProcPtr toCall;
    void** catalog = GetClassCatalog;
    void** ctVTable = *catalog;
    void* parentVtRec;
    extern ProcPtr _vtbl_7MMixin2[ ];

    toCall = (ProcPtr)ctVTable[21];
    vtRec = (*(InitVTableRec)toCall)(catalog, vtRec,
        _vtbl_7MMixin2, _extbl_MMixin2, 0, 8, (char*)0x0001);
}

extern ClientVTableRec _CVRMMixin3;
extern void _ct_7MMixin3Fv(void);
extern void _dtq_7MMixin3Fv(void);
extern void _ctq_7MMixin3Fv(void);
extern void _ctq_7MMixin3Fi(void);

ProcPtr _extbl_MMixin3[ ] =
{
    0,
    (ProcPtr)_ct_7MMixin3Fv,
    (ProcPtr)_dtq_7MMixin3Fv,
    (ProcPtr)_ctq_7MMixin3Fv,
    (ProcPtr)_ctq_7MMixin3Fi
};

#pragma segment A5Init
void _SVRMMixin3(void* vtRec, unsigned int vtRecSize)
{
    register ProcPtr toCall;
    void** catalog = GetClassCatalog;
    void** ctVTable = *catalog;
    void* parentVtRec;
    extern ProcPtr _vtbl_7MMixin3[ ];

    toCall = (ProcPtr)ctVTable[21];
    vtRec = (*(InitVTableRec)toCall)(catalog, vtRec,
        _vtbl_7MMixin3, _extbl_MMixin3, 0, 8, (char*)0x0001);
}
______________________________________
Copyright Apple Computer 1991-1993

For all of these classes, the pointer to the parent ClientVTableRec is NULL, the sizes are 8 bytes and positive (indicating that they cannot be instantiated with NewObject), and the final parameter is (char*)0x0001, indicating that the vtable format is the standard AT&T v2.1 format (remember that a 0 as this final parameter indicated that the vtable format was that of a class inherited from SingleObject) and that this is a primary vtable. Basically, the first byte of the pointer is a "dispatch code": 0 implies SingleObject dispatching, 1 implies AT&T v2.1 dispatching, and 2 implies SingleObject-type dispatching, but not in a SingleObject or HandleObject subclass. The next byte tells what kind of VTableRecord this is. Zero (0) means the primary vtable. One (1) indicates that it is a "parent" vtable. A "parent" vtable holds the ClientVTableRec of the parent class which the vtable inherits from. Note that this is not the same as the class of objects which we derive from. Two (2) indicates that it is a "virtual parent" vtable. A "virtual parent" inherits the vtable from a "parent" class exactly like a "parent" vtable, but the class from which we derive is a "virtual" base class.
The SLM assumes that for the secondary vtables that are generated, the first 8 bytes of the vtable are unused. TMainClass is a non-shared class, so no initialization (or stubs) are generated for it. The next class to generate initialization for is TMixedClass. As you look over the generated code, you will notice that you need to be able to figure out the names of all of the extra vtables that a multiply-inherited class generates, as well as which entries in the vtable belong to the class itself and which are inherited. The SLM also dynamically copies function pointers from inherited vtables to avoid being linked with stubs. In order to keep the stubs from getting linked anyway, as the LibraryBuilder tool scans the vtables in the object file, it replaces calls to inherited functions with a call to _pure_virtual_called. This keeps the linker from linking all of the stubs into the code.

__________________________________________________________________________
extern ClientVTableRec _CVRTMixedClass;
extern void _ct_11TMixedClassFv(void);
extern void _dtq_11TMixedClassFv(void);
extern void _ctq_11TMixedClassFv(void);
extern void _ctq_11TMixedClassFi(void);

ProcPtr _extbl_TMixedClass[ ] =
{
    0,
    (ProcPtr)_ct_11TMixedClassFv,
    (ProcPtr)_dtq_11TMixedClassFv,
    (ProcPtr)_ctq_11TMixedClassFv,
    (ProcPtr)_ctq_11TMixedClassFi
};

#pragma segment A5Init
void _SVRTMixedClass(void* vtRec, unsigned int vtRecSize)
{
    register ProcPtr toCall;
    void** cat = GetClassCatalog;
    void** ctVTable = *cat;
    void* parentVtRec;

    //
    // Keep track of whether we got an error or not
    //
    int isOK = 0;

    //
    // Keep track of where we are in initialization.
    //
    unsigned int index = 0;

    //
    // Declare the needed vtables and ClientVTableRecs
    //
    extern ProcPtr _vtbl_11TMixedClass[ ];
    extern ClientVTableRec _CVRTStdDynamic;
    extern ProcPtr _vtbl_7MMixin2_11TMixedClass[ ];
    extern ClientVTableRec _CVRMMixin2;
    extern ProcPtr _vtbl_7MMixin1_11TMixedClass[ ];
    extern ClientVTableRec _CVRMMixin1;

    //
    // Initialization is done in a do { } while (false) loop
    // so that we can break out of the loop. "index" will
    // tell us how far we got if we break out with an error
    //
    do
    {
        //
        // Start off with the primary vtable (_vtbl_11TMixedClass)
        // and initialize it. The primary vtable is initialized with
        // the last parameter to InitVTableRec as (char*)0x0001,
        // indicating that this is an AT&T V2.1-format vtable, and
        // that this is a primary vtable.
        // Like its singly-inherited counterpart, if the size parameter
        // is negative, it indicates that the class can be instantiated
        // with a "NewObject" call.
        //
        toCall = (ProcPtr)ctVTable[21];
        vtRec = (*(InitVTableRec)toCall)(cat, vtRec, _vtbl_11TMixedClass,
            _extbl_TMixedClass, &_CVRTStdDynamic, 28, (char*)0x0001);

        //
        // Now, let's get the parent. The parent vtable to
        // _vtbl_11TMixedClass is the vtable for TMainClass. However,
        // TMainClass is a non-shared class. Since TMainClass is not
        // multiply-inherited, we just consider its parent (TStdDynamic)
        // to be our parent and we get the parent VTableRec (this
        // forces TStdDynamic to stay loaded!)
        //
        toCall = (ProcPtr)ctVTable[33];
        parentVtRec = (*(GetVTableRec)toCall)(cat, &_CVRTStdDynamic, 1);
        if (parentVtRec == 0) break;

        //
        // We increment index to indicate that, if something goes wrong,
        // we need to call ReleaseVTableRec on _CVRTStdDynamic
        //
        index += 1;

        /*
        ** Ordinarily, here we would copy all of our parent's methods
        ** that we inherit. However, since our immediate parent is not
        ** shared, copying methods from TStdDynamic would get us into
        ** trouble, since any methods overridden by TMainClass would be
        ** lost. We content ourselves with linking directly with any
        ** methods that TMainClass overrides, and with stubs to any
        ** that it does not.
        */

        //
        // Now, let's initialize the VTableRec for one of the other
        // vtables. This VTable has the class MMixin2 as its parent.
        // A (char*)0x0101 is passed as the last parameter to indicate
        // that this is a VTable for one of the non-primary "parents",
        // but that it is not a "virtual" parent class.
        //
        toCall = (ProcPtr)ctVTable[21];
        vtRec = (*(InitVTableRec)toCall)(cat, vtRec,
            _vtbl_7MMixin2_11TMixedClass, _extbl_TMixedClass,
            &_CVRMMixin2, 28, (char*)0x0101);
        toCall = (ProcPtr)ctVTable[33];

        //
        // Fetch the parent VTableRec to force the parent's code to load
        //
        parentVtRec = (*(GetVTableRec)toCall)(cat, &_CVRMMixin2, 1);
        if (parentVtRec == 0) break;

        //
        // We increment "index" to indicate that, if something goes
        // wrong, we need to call ReleaseVTableRec on _CVRMMixin2 (and,
        // of course, also _CVRTStdDynamic)
        //
        index += 1;

        /************************
        * This code is OPTIONAL
        ************************/
        //
        // Now, since we are inheriting from the class MMixin2, and it
        // is a shared class, we copy the pointers from the vtable for
        // MMixin2 into our own vtable. The LibraryBuilder tool replaced
        // all of these entries that were inherited with a reference to
        // _pure_virtual_called so that the linker did not try to link
        // with the stubs from the MMixin2 class.
        //
        {
            unsigned int idx;

            //
            // Get the VTable of MMixin2
            //
            register _mptr* vtbl = *(_mptr**)parentVtRec;

            //
            // Get my own VTable
            //
            register _mptr* myVTable = (_mptr*)_vtbl_7MMixin2_11TMixedClass;

            //
            // Save a pointer to the start of my VTable
            //
            _mptr* base = myVTable;

            //
            // Now, we skip the first entry, because it's blank
            //
            myVTable += 1;
            vtbl += 1;

            //
            // The second entry (which happens to be the destructor) is
            // a pointer to one of the routines of TMixedClass, and not
            // an inherited routine. The _mptr .o field of any method
            // that belongs to "this" class will have a negative number
            // indicating the offset that must be added to the parent
            // class pointer (in this case MMixin2) in order to get to
            // the "real" object. We save that value off in the second
            // long of the first _mptr entry (which is always unused in
            // the AT&T v2.1 format)
            //
            base->func = (ProcPtr)(long)myVTable->o;

            //
            // Then, we skip entries that are our own, and copy any
            // entries that are inherited.
            //
            myVTable += 2;
            vtbl += 2;
            (*myVTable++).func = (*vtbl++).func;
        }
        /****************************
        * This code is NOT Optional
        ****************************/

        //
        // Now, we initialize the last VTableRec. We pass (char*)0x0201
        // to indicate to the SLM that this is a VTable corresponding to
        // a "virtual" parent class. One of the side effects of this is
        // that the SLM will not allow you to cast a TMixedClass to an
        // MMixin1 class (neither will C++).
        //
        toCall = (ProcPtr)ctVTable[21];
        vtRec = (*(InitVTableRec)toCall)(cat, vtRec,
            _vtbl_7MMixin1_11TMixedClass, _extbl_TMixedClass,
            &_CVRMMixin1, 28, (char*)0x0201);
        toCall = (ProcPtr)ctVTable[33];
        parentVtRec = (*(GetVTableRec)toCall)(cat, &_CVRMMixin1, 1);
        if (parentVtRec == 0) break;
        index += 1;

        /************************
        * This code is OPTIONAL
        ************************/
        //
        // Do all of the copying voodoo again
        //
        {
            unsigned int idx;
            register _mptr* vtbl = *(_mptr**)parentVtRec;
            register _mptr* myVTable = (_mptr*)_vtbl_7MMixin1_11TMixedClass;
            _mptr* base = myVTable;
            myVTable += 1;
            vtbl += 1;
            base->func = (ProcPtr)(long)myVTable->o;
            myVTable += 1;
            vtbl += 1;
            (*myVTable++).func = (*vtbl++).func;
        }
        /****************************
        * This code is NOT Optional
        ****************************/

        //
        // Flag success
        //
        isOK = 1;
    } while (0);

    //
    // If something went wrong, we have to call ReleaseVTableRec on any
    // parents we already called GetVTableRec on. "index" tells us how
    // far we got, so we can back out. Then we throw an exception.
    //
    if (isOK == 0)
    {
        toCall = (ProcPtr)ctVTable[26];
        switch (index)
        {
            case 2: (*(ReleaseVTableRec)toCall)(cat, &_CVRMMixin2);
            case 1: (*(ReleaseVTableRec)toCall)(cat, &_CVRTStdDynamic);
            default: break;
        }
        Fail(-3120, 0);
    }
}
__________________________________________________________________________
Copyright Apple Computer 1991-1993

A couple of key points to realize about the code above. The first is that it is not strictly necessary to dynamically copy methods that are inherited. If you do not, then the vtable will link with stubs to the inherited methods, which will work just fine and be only slightly slower. It is also not strictly necessary to modify the object file to make inherited references in the vtable point to _pure_virtual_called. You could still copy the pointers from your parent's vtable at runtime, and all that would happen is that the stubs become orphaned because no one uses them, so your library is a little bigger than it should be. However, what is important is that you somehow determine the offset of the subclass from the main class and store it in the .func field of the first vtable entry for each VTableRec (i.e., the 4 bytes starting at offset 4 in the vtable). The SLM chose to determine this at runtime, using the knowledge that the .o field of any vtable method that has been overridden by the main class has this offset already stored in it. This was easier than trying to fully parse the class declarations (with all the attendant knowledge of padding rules, etc.) to determine them, but whatever method works for a given environment can be used. The offset is always a negative number. Now, we get to the initialization code for the class TMixedClass2.
______________________________________
extern ClientVTableRec _CVRTMixedClass2;
extern void _ct_12TMixedClass2Fv(void);
extern void _dtq_12TMixedClass2Fv(void);
extern void _ctq_12TMixedClass2Fv(void);
extern void _ctq_12TMixedClass2Fi(void);

ProcPtr _extbl_TMixedClass2[] =
{
    0,
    (ProcPtr)_ct_12TMixedClass2Fv,
    (ProcPtr)_dtq_12TMixedClass2Fv,
    (ProcPtr)_ctq_12TMixedClass2Fv,
    (ProcPtr)_ctq_12TMixedClass2Fi
};

#pragma segment A5Init
void _SVRTMixedClass2(void* vtRec, unsigned int vtRecSize)
{
    register ProcPtr toCall;
    void** cat = GetClassCatalog;
    void** ctVTable = *cat;
    void* parentVtRec;
    int isOK = 0;
    unsigned int index = 0;

    extern ProcPtr _vtbl_12TMixedClass2[];
    extern ClientVTableRec _CVRTMixedClass;
    extern ClientVTableRec _CVRTMixedClass;
    extern ProcPtr _vtbl_20MMixin2_TMixedClass_12TMixedClass2[];
    extern ClientVTableRec _CVRTMixedClass;
    extern ClientVTableRec _CVRMMixin2;
    extern ProcPtr _vtbl_7MMixin1_12TMixedClass2[];
    extern ClientVTableRec _CVRTMixedClass;
    extern ClientVTableRec _CVRMMixin1;
    extern ProcPtr _vtbl_7MMixin3_12TMixedClass2[];
    extern ClientVTableRec _CVRMMixin3;
    extern ClientVTableRec _CVRMMixin3;

    do
    {
        //
        // Initialize the primary vtable which has TMixedClass
        // as a parent.
        //
        toCall = (ProcPtr)ctVTable[21];
        vtRec = (*(InitVTableRec)toCall)(cat, vtRec,
            _vtbl_12TMixedClass2, _extbl_TMixedClass2,
            &_CVRTMixedClass, 36, (char*)0x0001);

        toCall = (ProcPtr)ctVTable[33];
        parentVtRec = (*(GetVTableRec)toCall)(cat, &_CVRTMixedClass, 1);
        if (parentVtRec == 0) break;
        index += 1;

        /************************
        * This code is OPTIONAL
        ************************/

        //
        // Do the copying stuff
        //
        {
            unsigned int idx;
            register _mptr* vtbl = *(_mptr**)parentVtRec;
            register _mptr* myVTable = (_mptr*)_vtbl_12TMixedClass2;
            _mptr* base = myVTable;

            myVTable += 1;
            vtbl += 1;
            base->func = (ProcPtr)(long)myVTable->o;
            myVTable += 1;
            vtbl += 1;
            for (idx = 0; idx < 7; ++idx)
            {
                (*myVTable++).func = (*vtbl++).func;
            }
            myVTable += 1;
            vtbl += 1;
            (*myVTable++).func = (*vtbl++).func;
            (*myVTable++).func = (*vtbl++).func;
            (*myVTable++).func = (*vtbl++).func;
        }

        /****************************
        * This code is NOT Optional
        ****************************/

        //
        // Initialize a secondary vtable, which corresponds to the
        // methods from the MMixin2 class, but is inherited from
        // TMixedClass
        //
        toCall = (ProcPtr)ctVTable[21];
        vtRec = (*(InitVTableRec)toCall)(cat, vtRec,
            _vtbl_20MMixin2_TMixedClass_12TMixedClass2,
            _extbl_TMixedClass2, &_CVRTMixedClass, 36, (char*)0x0101);

        toCall = (ProcPtr)ctVTable[33];

        //
        // Get the Parent vtable rec
        //
        parentVtRec = (*(GetVTableRec)toCall)(cat, &_CVRTMixedClass, 1);
        if (parentVtRec == 0) break;
        index += 1;

        /************************
        * This code is OPTIONAL
        ************************/

        //
        // This vtable inherits from the vtable named
        // _vtbl_7MMixin2_11TMixedClass, which was the
        // 2nd VTable that was stored for TMixedClass, so we
        // skip the parentVtRec pointer ahead to the 2nd Vtable
        // so that we can properly inherit the methods.
        //
        parentVtRec = (char*)parentVtRec + 1*vtRecSize;

        //
        // Do the copying stuff
        //
        {
            unsigned int idx;
            register _mptr* vtbl = *(_mptr**)parentVtRec;
            register _mptr* myVTable =
                (_mptr*)_vtbl_20MMixin2_TMixedClass_12TMixedClass2;
            _mptr* base = myVTable;

            myVTable += 1;
            vtbl += 1;
            base->func = (ProcPtr)(long)myVTable->o;
            myVTable += 1;
            vtbl += 1;
            (*myVTable++).func = (*vtbl++).func;
        }

        /****************************
        * This code is NOT Optional
        ****************************/

        //
        // Initialize another secondary vtable. This one
        // corresponds to methods from MMixin1, but also
        // inherits from TMixedClass. It is flagged as a
        // virtual parent, since MMixin1 is declared as a
        // virtual parent in the "C++" header file.
        //
        toCall = (ProcPtr)ctVTable[21];
        vtRec = (*(InitVTableRec)toCall)(cat, vtRec,
            _vtbl_7MMixin1_12TMixedClass2, _extbl_TMixedClass2,
            &_CVRTMixedClass, 36, (char*)0x0201);

        toCall = (ProcPtr)ctVTable[33];
        parentVtRec = (*(GetVTableRec)toCall)(cat, &_CVRTMixedClass, 1);
        if (parentVtRec == 0) break;
        index += 1;

        /************************
        * This code is OPTIONAL
        ************************/

        //
        // This vtable inherits from the vtable named
        // _vtbl_7MMixin1_11TMixedClass, which was the
        // 3rd VTable that was stored for TMixedClass, so we
        // skip the parentVtRec pointer ahead to the 3rd Vtable
        // so that we can properly inherit the methods.
        //
        parentVtRec = (char*)parentVtRec + 2*vtRecSize;

        {
            unsigned int idx;
            register _mptr* vtbl = *(_mptr**)parentVtRec;
            register _mptr* myVTable = (_mptr*)_vtbl_7MMixin1_12TMixedClass2;
            _mptr* base = myVTable;

            myVTable += 1;
            vtbl += 1;
            base->func = (ProcPtr)(long)myVTable->o;
            myVTable += 2;
            vtbl += 2;
            (*myVTable++).func = (*vtbl++).func;
        }

        /****************************
        * This code is NOT Optional
        ****************************/

        toCall = (ProcPtr)ctVTable[21];
        vtRec = (*(InitVTableRec)toCall)(cat, vtRec,
            _vtbl_7MMixin3_12TMixedClass2, _extbl_TMixedClass2,
            &_CVRMMixin3, 36, (char*)0x0101);

        toCall = (ProcPtr)ctVTable[33];
        parentVtRec = (*(GetVTableRec)toCall)(cat, &_CVRMMixin3, 1);
        if (parentVtRec == 0) break;
        index += 1;

        /************************
        * This code is OPTIONAL
        ************************/

        {
            unsigned int idx;
            register _mptr* vtbl = *(_mptr**)parentVtRec;
            register _mptr* myVTable = (_mptr*)_vtbl_7MMixin3_12TMixedClass2;
            _mptr* base = myVTable;

            myVTable += 1;
            vtbl += 1;
            base->func = (ProcPtr)(long)myVTable->o;
            myVTable += 2;
            vtbl += 2;
            (*myVTable++).func = (*vtbl++).func;
        }

        /****************************
        * This code is NOT Optional
        ****************************/

        isOK = 1;
    } while (0);

    if (isOK == 0)
    {
        toCall = (ProcPtr)ctVTable[26];
        switch (index)
        {
            case 3: (*(ReleaseVTableRec)toCall)(cat, &_CVRTMixedClass);
            case 2: (*(ReleaseVTableRec)toCall)(cat, &_CVRTMixedClass);
            case 1: (*(ReleaseVTableRec)toCall)(cat, &_CVRTMixedClass);
            default: break;
        }
        Fail(-3120, 0);
    }
}
______________________________________
Copyright Apple Computer 1991-1993

In order for the optional method copying to work, it is vital that the ordering of the vtables be the same no matter which C++ compiler generated the vtables. In the example above, we know that we must inherit from the 2nd and 3rd vtables in the list of vtables generated for TMixedClass (by the way, notice that you are passed the size of the VTableRec so that you can do the necessary calculations, which still allows us to change the size of a VTableRec later). If TMixedClass were generated in another library, it would be fatal for us to copy methods from those vtables if they were not generated in the same order that the SLM thinks they were generated in. The SLM uses the order that is generated by AT&T's CFront. The algorithm looks like this:

1) Create a list of all of the parent classes in the order they were declared.

2) For each parent that was not declared "virtual", examine all of your parents and their parents, etc. If anywhere in the hierarchy you find this parent declared "virtual", remove this class from the list of your parents.

3) Now, create two lists of parents: a "hasA" list for virtual parents, and an "isA" list for non-virtual parents. Starting at the beginning of your original parent list, if the parent is "virtual", put it at the front of the "hasA" list, and if it is not, put it at the back of the "isA" list.

4) Now, take the "hasA" list and move each parent on the list to the end of the "isA" list. You now have a list of the parent classes in the proper order.
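The four ordering steps above can be sketched in C++. The Parent structure, its field names, and the function name below are hypothetical illustrations invented for this sketch, not SLM or CFront types; only the ordering rules themselves come from the text.

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

// Hypothetical description of one parent class in a declaration.
struct Parent {
    std::string name;
    bool isVirtual;               // declared "virtual" in the C++ header
    std::vector<Parent> parents;  // that parent's own declared parents
};

// Returns true if a class named `name` is declared "virtual" anywhere
// in the hierarchy below `p` (step 2's search).
static bool virtualSomewhere(const Parent& p, const std::string& name) {
    for (const Parent& q : p.parents) {
        if (q.name == name && q.isVirtual) return true;
        if (virtualSomewhere(q, name)) return true;
    }
    return false;
}

// Steps 1-4: order the parent classes the way CFront (and therefore
// the SLM) expects them.
std::vector<std::string> orderParents(const std::vector<Parent>& declared) {
    // Step 1: parents in declaration order. Step 2: drop a non-virtual
    // parent that appears declared "virtual" anywhere in the hierarchy.
    std::vector<Parent> kept;
    for (const Parent& p : declared) {
        bool drop = false;
        if (!p.isVirtual) {
            for (const Parent& q : declared)
                if (virtualSomewhere(q, p.name)) { drop = true; break; }
        }
        if (!drop) kept.push_back(p);
    }
    // Step 3: virtual parents go on the FRONT of the "hasA" list,
    // non-virtual parents on the BACK of the "isA" list.
    std::deque<std::string> hasA;
    std::vector<std::string> isA;
    for (const Parent& p : kept) {
        if (p.isVirtual) hasA.push_front(p.name);
        else             isA.push_back(p.name);
    }
    // Step 4: move the "hasA" list to the end of the "isA" list.
    for (const std::string& n : hasA) isA.push_back(n);
    return isA;
}
```

For a class declared with parents "virtual A, B, virtual C, D", this yields B, D, C, A: the virtual parents end up reversed at the tail, after the non-virtual parents.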
Now, we can generate the names of the vtables from this list.

1) Create a new list for vtable names.

2) Create a vtable called _vtbl_##ClassName, where ## is the length of the class name. Its parent class is the first class in your parent list, unless that first class is flagged "virtual", in which case there is no parent. Add this vtable to the end of the vtable list.

3) If the first thing in your parent list is a virtual parent, then another vtable must be generated with the following name: _vtbl_##<ParentName>_##<MyName>, where <ParentName> is the name of this first parent in the list. Add this vtable to the end of the vtable list.

4) Get the list of vtables belonging to the parent class that is first in your list. Skip the first vtable in this parent's list.

5) For each remaining vtable in this parent's list, create a new vtable name: _vtbl_##<ParentSubName>_##<MyName>, where the <ParentSubName> is derived from the name of the parent vtable as follows:

1) Strip the _vtbl_ from the front of the vtable name.
2) Remove the numbers from the end part of the vtable name.
3) Change the numbers at the front of the vtable name so that they correspond to the length of the new string.

For example, _vtbl_7MMixin1_11TMixedClass becomes _20MMixin1_TMixedClass.

6) If this new vtable name is not already in your list, append it to the end of the list.

7) For each remaining parent in your parent list, do steps 5 and 6 (do not skip the first vtable for the remaining parents).

At this point, you have a list of vtables for your class that is exactly the same list that the SLM LibraryBuilder tool will generate. Of course, this algorithm must be suitably modified for compilers that generate vtables with different naming conventions. The following is the initialization code for the library that is created.
______________________________________
#pragma segment A5Init
void* _InitVTableRecords(void)
{
    register ProcPtr toCall;
    void* vtRec;
    void* savedRec;
    void** catalog = GetClassCatalog;
    void** ctVTable = *catalog;

    toCall = (ProcPtr)ctVTable[19];
    savedRec = (*(GetVTableMemory)toCall)(catalog, 10);
    toCall = (ProcPtr)ctVTable[20];
    vtRec = savedRec;

    //
    // Initialize the MMixin1 class
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec, _SVRMMixin1, &_CVRMMixin1);

    //
    // Initialize the MMixin2 class
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec, _SVRMMixin2, &_CVRMMixin2);

    //
    // Initialize the MMixin3 class
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec, _SVRMMixin3, &_CVRMMixin3);

    //
    // Initialize the TMixedClass class - primary vtable
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec, _SVRTMixedClass, &_CVRTMixedClass);

    //
    // Initialize the TMixedClass class - secondary vtable
    // corresponding to a parent class of MMixin2
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec, 0, &_CVRMMixin2);

    //
    // Initialize the TMixedClass class - secondary vtable
    // corresponding to a parent class of MMixin1
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec, 0, &_CVRMMixin1);

    //
    // Initialize the TMixedClass2 class - primary vtable
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec, _SVRTMixedClass2, &_CVRTMixedClass2);

    //
    // Initialize the TMixedClass2 class - secondary vtable
    // corresponding to a parent class of MMixin2
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec, 0, &_CVRMMixin2);

    //
    // Initialize the TMixedClass2 class - secondary vtable
    // corresponding to a parent class of MMixin1
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec, 0, &_CVRMMixin1);

    //
    // Initialize the TMixedClass2 class - secondary vtable
    // corresponding to a parent class of MMixin3
    //
    vtRec = (*(Init1VTableRec)toCall)(catalog, vtRec, 0, &_CVRMMixin3);

    return savedRec;
}
______________________________________
Copyright Apple Computer 1991-1993

Notice that all of the secondary vtables have a 0 for the pointer to the initialization function. The SLM will assume that the initialization function for the first prior VTableRec which has one will take care of initialization for these VTableRecs. In addition, for all of the secondary vtables, the ClientVTableRec is not the ClientVTableRec of the class itself, but of the parent class which corresponds to the vtable that will be stored in the VTableRec, which is the class that we can cast the object to. For instance, if we look at the second call to Init1VTableRec for TMixedClass2, &_CVRMMixin2 is passed to the SLM. If we look at the SVR function for TMixedClass2, we find that _vtbl_20MMixin2_TMixedClass_12TMixedClass2 is the vtable that is stored into this VTableRec. This vtable is the one that will be used if a TMixedClass2 is cast to an MMixin2 object. However, notice that we passed &_CVRTMixedClass as the ClientVTableRec of the parent. This is because this vtable, while it belongs to the part of TMixedClass2 that is an MMixin2 object, is actually inherited from the _vtbl_7MMixin2_11TMixedClass vtable from TMixedClass.

The Shared Library

An SLM Shared Library consists of a number of resources, bound together by a resource id and a code resource type (see FIG. 8). Any number of these libraries can be placed in a single file, as long as the resource ids and code resource types used are unique. All libraries have a `libr` resource with a resource id. The SLM will scan for all `libr` resources within a library file in order to catalog all of the libraries within the file. Within the `libr` resource, the code resource type that belongs to that library is defined. For library files with multiple libraries, these types are commonly `cd01`, `cd02`, etc. In addition, each library may have a `libi` resource which has the same resource id as the corresponding `libr` resource.
This `libi` resource contains information about all of the shared classes and function sets which are needed by the library. The tool CreateLibraryLoadRsrc creates `libi` resources.

The `libr` resource

A library is installed when the SLM is initialized (normally at boot time) or when it is drag-installed into a library folder. This is normally the "Extensions" folder, but additional folders may be registered by applications as library folders. When a library is installed, its `libr` resource is read in by the library manager. The library manager keeps a catalog of Class IDs and FunctionSet IDs, and the data for each, which it gets from the `libr` resource. Note that a library file may have more than one library in it, in which case it has multiple `libr` resources.

The `libr` resource format (note: the array LibrLine will not have more than one entry; multiple `libr` resources will be present instead):

______________________________________
type `libr` {
    array LibrLine {        /* information for a library */
        cstring;            /* Library id */
        align word;
        string[4];          /* code resource type */
        hex byte;           /* libr template major version */
        hex byte;           /* libr template minor version */
        hex byte;           /* Major version */
        hex byte;           /* Minor revision */
        hex byte;           /* Development stage */
        hex byte;           /* Release within stage */
        integer;            /* reserved in v1 (0) */
        integer;            /* PerClientDataSize in v1 (0 = default) */
        longint;            /* HeapSize in v1 (0 = default) */
        longint
            preload = 0x01,
            clientPool = 0x02,
            nosegunload = 0x04,
            loaddeps = 0x08,
            forcedeps = 0x10,
            loadself = 0x20,
            defaultHeap = 0x0000,
            tempHeap = 0x0100,
            sysHeap = 0x0200,
            appHeap = 0x0300,
            holdMemory = 0x0400,
            notSystem7 = 0x10000,
            notSystem6 = 0x20000,
            notVMOn = 0x40000,
            notVMOff = 0x80000,
            hasFPU = 0x100000,
            hasNoFPU = 0x200000,
            not68000 = 0x400000,
            not68020 = 0x800000,
            not68030 = 0x1000000,
            not68040 = 0x2000000;   /* flags for the code resource */
        integer = $$CountOf(ClassIDs);  /* # class IDs */
        array ClassIDs {    /* array of classids the library implements */
            longint
                preload = 0x01,
                newobject = 0x02,
                isFunctionSet = 0x04;   /* flags for the class */
            integer;        /* current version */
            integer;        /* minimum version */
            cstring;        /* the class id string */
            align WORD;
            integer = $$CountOf(ParentIDs);
            array ParentIDs {
                cstring;    /* the parent class id string */
            };
            align WORD;
        };
    };
};
______________________________________
Copyright Apple Computer 1991-1993

ExampleLibrary `libr` resource

______________________________________
#define SLMType `code`
#define SLMID 0

resource `libr` (SLMID) {
    {
    "appl$ExampleLibrary,1.1",
    "code",
    0x01, 0x10, 0x01, 0x10, 0x60, 0x04, 0, 0, 2,
    {
        10, 0x0110, 0x0110, "appl:exam$TExampleClass",
            { "!$dyna" };
        4, 0x0110, 0x0110, "appl:exam$ExampleFSet",
            { };
    }
    }
};
______________________________________
Copyright Apple Computer 1991-1993

Multiple-inheritance example `libr` resource

______________________________________
#define SLMType `cod8`
#define SLMID 8

resource `libr` (SLMID) {
    {
    "quin:test$MITest1,1.1",
    "cod8",
    0x01, 0x10, 0x01, 0x10, 0x20, 0x01, 0, 0, 6,
    {
        26, 0x0110, 0x0110, "quin:test$MMixin1", { };
        26, 0x0110, 0x0110, "quin:test$MMixin2", { };
        26, 0x0110, 0x0110, "quin:test$MMixin3", { };
        26, 0x0110, 0x0110, "quin:test$TMixedClass",
            { "*quin:test$MMixin1", "quin:test$MMixin2", "!$sdyn" };
        26, 0x0110, 0x0110, "quin:test$TMixedClass2",
            { "*quin:test$MMixin1", "quin:test$MMixin3", "quin:test$TMixedClass" };
    }
    }
};
______________________________________
Copyright Apple Computer 1991-1993

Notice that a star character `*` is put in front of the ClassID of any parent class which was defined as virtual in the class declaration. There are several fields in the `libr` resource which should be called to your attention:

1) The flags for the code resource. This field is a bitmap that defines the attributes of the library, as well as bits that can limit when the library will load. See the later section on the LibraryBuilder tool for the meaning of each of the bits.

2) The "heap" type is encoded in the lower 2 bits of the second byte of the flags. The next bit indicates whether the memory for the library should be "held" when Virtual Memory is in use.

3) The longint specifying the heap size. This allows you to specify the size of the heap you want the library to load in. Normally, the SLM creates a heap big enough to hold all of the code in the library. If you are planning on loading and unloading segments manually, you might want to create a smaller heap.

4) The integer specifying the per-client data size. If your library requires per-client data, the SLM will manage it for you, but it needs to know the size of the data required. You can then use the GetClientData function to retrieve a structure of this size for each client that you have.

5) The `libr` template major and minor version numbers. These should always be set to 0x01 for the major version and 0x10 for the minor version. This corresponds to the `libr` template definition for SLM.

6) The version number of the library is encoded in the library id.

The `libi` Resource

Libraries and clients may have `libi` resources. As has been previously indicated, the resource id of the `libi` resource for a library must match the resource id of the `libr` resource. For applications and clients, the `libi` resource must have an id of 0. The format of the `libi` resource is:

______________________________________
type `libi` {
    longint = 0;    // version number
    longint = 0;    // reserved
    integer = $$CountOf(A5Offsets);
    array A5Offsets {
        integer;
    };
};
______________________________________
Copyright Apple Computer 1991-1993

Currently the `libi` resource is nothing more than A5 offsets to the ClientVTableRecs that are used by the library or client. This allows us to create the `libi` resource by scanning the map file created by linking the shared library or client for references to symbols that look like _CVRxxxxxxxx, and simply storing the A5 offset indicated into the resource.
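As a rough illustration of that map-file scan, the sketch below collects the offsets of every symbol whose name begins with _CVR. The two-column "symbol offset" map format and the function name are assumptions invented for this example; real MPW linker map files are formatted differently.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Collect the A5 offsets of all _CVRxxxxxxxx (ClientVTableRec) symbols
// from linker map text. The "symbol offset" line format is hypothetical.
std::vector<int> collectCVROffsets(const std::string& mapText) {
    std::vector<int> offsets;
    std::istringstream in(mapText);
    std::string symbol;
    int offset;
    while (in >> symbol >> offset) {
        // ClientVTableRec symbols look like _CVR<ClassName>.
        if (symbol.rfind("_CVR", 0) == 0)
            offsets.push_back(offset);
    }
    return offsets;
}
```

The offsets gathered this way would then be written into the A5Offsets array of the `libi` resource.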
Library Initialization

When a library is loaded, the jump table (`code` 0) and initialization (`code` 1) resources are loaded and the entry point is called. The entry point routine initializes the static data for the library and then calls the init v-table function. The _InitVTableRecords function sets up the v-tables and vtable records (VTableRec), and binds them to the library manager's internal catalog entry (TClass) for each function set and class in the library. This is the entry point routine in the library. The LibraryBuilder link command (generated in SharedLibtemp.bat) uses the option "-m DynamicCodeEntry", which makes this the third jump table entry (in model far, the first two entries are "dummy" entries).

______________________________________
extern "C" void _InitProc();
extern "C" void _CleanupProc();
extern TLibraryManager* _gLibraryManager;

#pragma segment A5Init
/**********************************************************
** DynamicCodeEntry
**********************************************************/
extern "C" long DynamicCodeEntry(CodeEntrySelector selector,
                                 long param1, long param2)
{
    VTableRecPtr myVTableRecArray;

    switch (selector)
    {
        case kInitWorld:
            // Our own modified _DataInit routine relocates and
            // unpacks initialized data and relocates A5 relative
            // addresses.
            _InitData((Ptr) param1, (Ptr) param2);
            return 0;

        case kSetLibraryManager:
            _gLibraryManager = (TLibraryManager*) param1;
            return 0;

        case kInitLibraryProc:
            _InitProc();
            return (long)_CleanupProc;

        case kSetupVTables:
            myVTableRecArray = _InitVTableRecords();
            return (long) myVTableRecArray;
    }
}
______________________________________
Copyright Apple Computer 1991-1993

The Export Definition File

The "export" file defines the classes and functions to be exported. This file has 3 major components: a Library declaration, Class declarations, and FunctionSet declarations. The full syntax for these declarations is described in the next section.
In addition, "C" or "C++" style comments are allowed, as well as #include statements. The LibraryBuilder application is able to scan these include files to learn the definitions of #defined symbols that may make up parts of the export definitions. A .exp file consists of comments, #include directives, #define directives, a Library declaration, plus Class and/or FunctionSet declarations.

The Library Declaration

The library declaration defines the ID of the shared library and the version. Additional parameters are available to configure the library.

______________________________________
Library {
    initproc = <ProcName>;              optional
    cleanupProc = <ProcName>;           optional
    flags = <FlagOptions>;              optional
    id = <LibraryIDString>;             required
    version = <LibraryVersion>;         required
    memory = <MemoryOption>;            optional
    heap = <HeapType>;                  optional
    clientdata = <ClientDataOption>;    optional
};
______________________________________
Copyright Apple Computer 1991-1993

Element Descriptions

initproc
This declaration allows you to specify the name of a "C" routine (which takes no parameters and returns no value) which will be called immediately after loading and configuring the library. This routine may be in the A5Init segment, so that it is unloaded from memory after the library is fully loaded.

cleanupProc
This declaration allows you to specify the name of a "C" routine (which takes no parameters and returns no value) which will be called just before a library is unloaded from memory. This routine must not be in the A5Init segment, since it cannot be reloaded.

flags=noSegUnload || !segUnload
This flag indicates that the segments of the shared library will not be unloaded by the client. The SLM resolves all jump table references to code addresses at library load time and removes the jump table from memory. This is the default setting.

flags=segUnload || !noSegUnload
This flag indicates that the segments of the shared library may be unloaded by the client.
The SLM will allow segments to be loaded and unloaded in the shared library, and will keep the jump table in memory.

flags=preload
This flag indicates that all segments of the shared library should be loaded at library load time. It does not guarantee that the segments will not be unloaded, so the jump table must be kept in memory and intersegment references are left pointing to the jump table. "flags=!preload" is also supported, but is the default case.

flags=loaddeps
This flag indicates that the SLM should load all dependent classes whenever this library is loaded (based on the information in the `libr` resource created during the build process). Using this flag will guarantee that all libraries that your library depends on exist. It does not guarantee that there is enough memory available to load them.

flags=forcedeps
This flag acts just like the "loaddeps" flag, but it forces the dependent libraries to be loaded into memory.

flags=stayloaded
This flag forces your library to stay loaded. It requires a call to UnloadLibraries from within your library to allow your library to unload. It is equivalent to calling LoadLibraries(true, true) within your InitProc. It also causes all of your dependencies to be loaded into memory (like the "forcedeps" flag).

flags=system6 || !system7
This indicates that your library should not be registered if it is installed on a System 7.x-based Macintosh. No clients will be able to see any of the classes or function sets in your library. This flag is useful if you have 2 different versions of your library, one for System 6.x and one for System 7.x.

flags=system7 || !system6
This indicates that your library should not be registered if it is installed on a System 6.x-based Macintosh. No clients will be able to see any of the classes or function sets in your library. This flag is useful if you have 2 different versions of your library, one for System 6.x and one for System 7.x.
flags=vmOn || !vmOff
This indicates that your library should not be registered if it is installed on a Macintosh which is running with Virtual Memory (VM) turned on. No clients will be able to see any of the classes or function sets in your library. This flag is useful if you have 2 different versions of your library, one for Virtual Memory on and one for Virtual Memory off.

flags=vmOff || !vmOn
This indicates that your library should not be registered if it is installed on a Macintosh with Virtual Memory (VM) turned off. No clients will be able to see any of the classes or function sets in your library. This flag is useful if you have 2 different versions of your library, one for Virtual Memory on and one for Virtual Memory off.

flags=fpuPresent || !fpuNotPresent
This indicates that your library should not be registered if it is installed on a Macintosh without an FPU.

flags=fpuNotPresent || !fpuPresent
This indicates that your library should not be registered if it is installed on a Macintosh with an FPU.

flags=mc68000 || mc68020 || mc68030 || mc68040
This indicates that your library should only be registered if it is installed on a Macintosh with one of the specified processors. You may specify more than one processor. For example, "flags=mc68000, mc68020" will cause your library to be registered only on 68000 or 68020 processors.

flags=!mc68000 || !mc68020 || !mc68030 || !mc68040
This indicates that your library should not be registered if it is installed on a Macintosh with any of the specified processors. You may specify more than one processor. For example, "flags=!mc68000, !mc68020" will cause your library to be registered only on Macintoshes with a 68030 or higher processor. It is an error to mix not ("!") terms with non-not terms (e.g., flags=mc68000, !mc68020).

id=
This declaration defines the ID of the library. A library ID is normally in the form "xxxx:yyyy$Name".
This ID string is a quoted string, but it may include #defined constants as part of its definition as long as you #include the files that contain the #define declarations that resolve the constants.

version=
This declaration defines the version of the library. The version number is in the standard Apple version number form: #.#[.#], followed by either nothing or [dabf]# to indicate the release status. For example, 1.0b2 or 1.1.2d5. This may be a #defined symbol.

memory=client
This declaration indicates that any "new" operations done in the library should use the client's pool. This is the default if it is not specified. It is equivalent to the useclientpool option in earlier versions of the SLM.

memory=local
This declaration indicates that any "new" operations done in the library should use the local pool.

heap=default || temp || system || application [,hold][,#]
This tells the SLM where you want your library to be loaded into memory. Normally, you should not specify this attribute unless you have a very good reason. However, if your library must run under virtual memory and cannot move in memory (for instance, a networking driver), you can specify the ",hold" attribute to inform the SLM that you require the memory that your library is loaded into to be "held" under Virtual Memory. You can also optionally specify the size of the heap that you want your library to load into (this option only makes sense for default or temp).

clientData=<StructureName> || #
This tells the SLM that you require per-client static data. You can specify either a number of bytes or the name of a structure. Whenever you call GetClientData, you will be returned a structure of the specified size. The first time the structure is created for a given client, it will be zeroed. After the first time, you will get back the structure corresponding to your current client.
If you specify a structure name, the object file must have the type information available to determine the size of the structure, or an error will be generated.

The Class Declaration

A full class declaration is:

______________________________________
Class <ClassName> {
    version = <ClassVersion>;
    flags = preload, newobject, noVirtualExports, noMethodExports, noExports;
    exports = <ListOfFunctionNames>;
    dontExport = <ListOfFunctionNames>;
    private = * | <ListOfFunctionNames>;
};
______________________________________
Copyright Apple Computer 1991-1993

All fields except the <ClassName> are optional. The minimalist class declaration is just:

Class <ClassName>;

The id of the class must be #defined as a constant of the form k<ClassName>ID. It is optional (but a very good idea) for your class ID to terminate with a "," followed by the version number of the class, especially if your class can be instantiated using the SLM NewObject function. This will keep your clients from inadvertently getting a wrong version of your class.

Element Descriptions

<ClassName>
This is the name of the class that you want to export.

version=
This declaration defines the version of the class. The version number is in the standard Apple version number form: #.#[.#]. The version number may not have the extra release information (like b2) on it. However, the version number may be 2 version numbers separated either by 3 dots (...) or an ellipsis (option-;) character. This indicates the minimum version number of the class that this class is backwards-compatible with, and the current version number of the class. If you do not specify a version number and the ClassID of the class has a version number in it, that will be used. Otherwise, the version number specified in the "Library" declaration will be assumed. This may be a #defined symbol.

flags=newobject
This flag specifies that clients are allowed to create the class by ClassID using the NewObject routine.
A fatal error will occur at build time if this flag is set but you do not have a default constructor for your class (a default constructor is one which takes no arguments), your class is abstract (has a "pure-virtual" method), or the class size cannot be determined from symbol information in the object file.

flags=preload
This flag specifies that an instance of the class should be created whenever the system boots up. A fatal error will occur at build time if this flag is set but you do not have a default constructor for your class (a default constructor is one which takes no arguments), or your class is abstract (has a "pure-virtual" method). If this flag is set, the newobject flag is automatically set.

flags=noExports
This flag specifies that no methods of this class are to be exported. NewObject is the only way that a client can use your class if this flag is set and you do not export constructors in the exports= section, and clients can only call virtual functions in the class, unless you explicitly export methods (see exports= below).

flags=noVirtualExports
This flag specifies that no virtual methods of this class are to be exported. For the purposes of consistency in the SLM, the destructor of a class is NOT considered a virtual method, even if it was defined that way. You can explicitly export some virtual functions using the exports= clause below.

flags=noMethodExports
This flag specifies that no non-virtual methods of this class are to be exported. This includes constructors and the destructor for the class. NewObject is the only way that a client can use your class if this flag is set and you do not export constructors in the exports= section.

exports=
This declares a comma-separated list of methods that you want to export from the class. It is normally used to override the "noExports", "noMethodExports", or "noVirtualExports" flags for individual methods.
You only need to specify the function name, but if it is a pascal function, you need to put the keyword "pascal" in front of the function name. Like C++, the SLM considers ALL variants of a member function as the same function, and will export them all. To export operators, use the C++ syntax (e.g. operator+=). To export constructors, use the name of the class, and to export destructors, use ~<Name of Class>.

dontexport=
This declares a comma-separated list of functions that you do not want to export from the class. You only need to specify the function name, but if it is a pascal function, you need to put the keyword "pascal" in front of the function name. Like C++, the SLM considers ALL variants of a member function as the same function, and will not export any of them. You may not use both exports= and dontexport= for the same class.

private=
This declares a comma-separated list of methods that you want to export from the class privately. Any methods specified in this list will be exported, but will go into a separate client object file (defined by the -privateNear and/or -privateFar command-line switches to LibraryBuilder).

private=*
This declares that all methods that can be exported should be exported privately. If you have set noMethodExports, then all virtual methods will be exported privately that are not either explicitly exported publicly by the exports= clause or specifically excluded from being exported by a dontexport= clause. If you have set noVirtualExports, then all non-virtual methods will be exported privately that are not either explicitly exported publicly by the exports= clause or specifically excluded from being exported by a dontexport= clause. If you have neither flag set, then all methods of the class will be exported privately that are not either explicitly exported publicly by the exports= clause or specifically excluded from being exported by a dontexport= clause.
It is an error to use this switch if the noExports flag is set.

The FunctionSet Declaration

______________________________________
FunctionSet <FunctionSetName> {
    id = <ClassID>;                         required
    interfaceID = <ClassID>;                optional
    version = <ClassVersion>;               optional
    exports = <ListOfFunctionNames>;        optional
    dontexport = <ListOfFunctionNames>;     optional
    private = * | <ListOfFunctionNames>;    optional
};
______________________________________
Copyright Apple Computer 1991-1993

If a function set does not have an exports= clause and it does not have a dontexport= clause, all global functions (that are not methods of a C++ class) will be exported (subject to any constraints set by the private= clause; see below). If there are multiple function sets in a library, only one of them can be missing both of these clauses. The function set that is missing both clauses will export all of the global functions (that are not methods of a C++ class) that are not exported by any of the other function sets in the library.

Element Descriptions

<FunctionSetName>
This provides a unique name for your function set when linking.

id=
This declaration defines the classID of the function set. A classID is normally in the form "xxxx:yyyy$SomeName". This ID string is a quoted string, but it may include #defined constants as part of its definition as long as you #include the files that contain the #define declarations that resolve the constants. If you do not include an "id=" declaration, a #define found in the included files whose name matches k<FunctionSetName>ID will be assumed to be the classID of the class. An error will occur at build time if the classID of the class cannot be determined.

interfaceID=
This declaration defines an interface ID for the function set. It has the same format as all other ClassIDs. By defining an interface ClassID, you can use the SLM's FunctionSetInfo methods to find all Function Sets which have the same interface ID.
Presumably, all function sets with the same interface ID export the same functionality, either by name or by index. This gives you a kind of object-oriented ability for ordinary functions.

version=
This declaration defines the version of the function set. The version number is in the standard Apple version number form: #.#[.#]. The version number may not have the extra release information (like b2) on it. However, the version number may be two version numbers separated either by 3 dots (...) or an ellipsis character (option-;). This indicates the minimum version number of the function set that this function set is backwards-compatible with, and the current version number of the function set. Nothing is done with this information in version 1.0 of the SLM, but future versions will take advantage of it. If you do not specify a version number, the version number specified in the "Library" declaration will be assumed. This may be a #defined symbol.

exports=
This declares a comma-separated list of functions that you want to export in this function set. You only need to specify the function name, but if it is a pascal function, you need to put the keyword "pascal" in front of the function name. Like C++, the SLM considers ALL variants of a function as the same function, and will export them all (unless you used the -c switch on the BuildSharedLibrary command line). If you are exporting a C++ class method, you should precede the method name with <ClassName>::. For a C++ class method, the -c switch is ignored and all variants of the method are exported. To export C++ operator overloads, use the C++ syntax (e.g. operator+=). To export constructors, use <ClassName>::<ClassName>, and to export destructors, use <ClassName>::~<ClassName>.

Some special keywords are available in this clause. They are:
1) static <ClassName> -- all static methods of the specified class will be exported.
2) class <ClassName> -- all non-static methods of the specified class will be exported.
3) extern <FunctionName> -- the specified function will be exported by name.
4) pascal <FunctionName> -- the specified function is a pascal function. The "pascal" keyword can be combined with the "extern" keyword, if necessary.

dontexport=
This declares a comma-separated list of functions that you do not want to export in this function set. It has the same syntax as the "exports=" clause, except that the "static", "class" and "extern" keywords are not valid.

private=
This declares a comma-separated list of methods that you want to export from the function set privately. Any methods specified in this list will be exported, but will go into a separate client object file (defined by the -privateNear and/or -privateFar command-line switches to LibraryBuilder). If you have not defined an exports= or dontexport= clause, then all other functions will be exported publicly.

private=*
This declares that all functions that can be exported should be exported privately. If you have not defined an exports= or dontexport= clause, then all of the functions will be exported privately. If you have an exports= clause, then the functions declared there will be exported publicly, and all others will be exported privately. If you have a dontexport= clause, then the functions declared there will not be exported at all, and all others will be exported privately. If you have both clauses, those in the dontexport= clause will not be exported, those in the exports= clause will be exported publicly, and all others will be exported privately.

Conclusion

Accordingly, the present invention provides a system extension that brings dynamic linking and dynamic loading to a computer system architecture through the use of shared libraries. The shared library manager allows you to turn almost any code into a shared library without modifying your sources and without writing any additional code.
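All the developer writes is a small export declaration. A complete (purely hypothetical) declaration file combining the clauses described above might look like the sketch below; the library name, classIDs, and function names are invented for illustration, and the exact file syntax should be checked against the SLM documentation:

```text
Library MySorting
{
    id = "appl:sort$MySorting";
    version = 1.0;
};

FunctionSet SortFunctions
{
    id = "appl:sort$SortFunctions";
    version = 1.0...1.1;                  /* backwards-compatible since 1.0 */
    exports = class TSorter, extern DoSort, pascal DoPascalSort;
    private = DebugDump;                  /* exported, but into the private
                                             client object file */
};
```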
You write the code that goes into the shared library in your chosen source language or assembly, and call it from any of those languages as well. In addition, the system allows for calling virtual functions in a C++ class in a shared library without additional overhead. This system is especially effective for C++ classes. Object-oriented languages like C++ give programmers the power of modular design with innate code reuse and offer better ways of putting together programs. However, the concept of code reuse and modularity is not fully realized in many implementations today. A general mechanism for runtime sharing of classes allows developers to take more advantage of the object-oriented benefits of C++. True code sharing and reuse becomes possible. Improvements in implementations of classes are immediately usable by all applications, without rebuilding the applications that use the class. It is possible to create a class that derives from a base class which is in a shared library, leading to yet another term: dynamic inheritance. In fact, the subclass itself can be in a shared library. Thus, the present invention is particularly appropriate for applications that want to allow modules to be added on at a later time. If the modules are placed in shared libraries, the application can check to see which modules exist, and choose which modules to use. In addition, tools such as spellcheckers are well suited for shared libraries according to the present invention. If all spellcheckers share a common interface, the application can choose which spellchecker to use. Also, the spellchecker can be used by more than one application at the same time. In general, any code that you want to share between one or more applications is a candidate for a shared library. This is especially useful for large software companies that sell multiple applications that contain some common code. The shared library manager makes it easy to share this code.
For example, a word processor might want to take advantage of some of the graphics capabilities of a graphics program. Such abilities might be placed in shared libraries and dynamically linked according to the present invention. Some key features of the shared library manager include the following.

Dynamic Linking and Loading
Shared libraries are loaded and linked with clients at run time (and unloaded when no clients are using them). Loading occurs on demand, not at launch time, but the user may force libraries to load at launch time to be sure of their availability.

Dynamic Installation
Shared library files may be dragged in and out of the Extensions folder without having to reboot to use the shared libraries.

Usage Verification
An application can verify that a set of classes or functions required for proper operation of the application is available. The shared libraries required can then be loaded, or loading can be delayed until the code is needed.

Library Preloading
To guarantee that a library will be available when it is needed, a shared library may be set to preload at boot time, or it may be explicitly loaded by a client after boot time but before any classes or functions are actually used.

Performance
The Shared Library Manager provides high-performance dynamic loading and linking of function sets and classes, and high-performance construction, use, and destruction of objects. In C++, a method or virtual function is called by a single indirection through a table of pointers to the function implementations (called the v-table or virtual function table). A dynamically linked function or procedure is "snap-linked", which means that the binding overhead occurs only once, after which the target address is cached (the link is "snapped") in the client.
This calling mechanism is much more efficient than the Macintosh Device Manager dispatching which is used by device drivers, or the A-trap plus selector code mechanism used by stand-alone code resources on the Macintosh (such as CDEFs, WDEFs, and the Communications Toolbox). Other mechanisms such as IPC messages are also inefficient by comparison and not well suited for time critical use (as required by high performance networking protocols or other high performance device drivers). Dynamic Inheritance A class may inherit from a class which is not in the same shared library file. This also means that a developer can create a class that inherits from another developer's class. The new subclass can either be in another shared library or in the application code. Dynamic Class Creation When creating an object, you can dynamically create it by name. This allows an application to create objects of classes it has never seen before. Generally, these objects are a subclass of a class the application does know about. Class Verification A client can verify at run-time that a given class is derived from a particular base class or that an object is of a particular base class. Dynamic class creation and class verification are used. A computer program listing appendix under 37 C.F.R. §1.96 follows, consisting of 54 pages. ##SPC1##
https://patents.google.com/patent/US5615400?oq=6%2C460%2C050
One of the cool features of .NET Core is that only the namespaces you reference in the project end up in the DLL your project produces. I had read that when you initially create a project in Visual Studio, numerous packages are included by default, and because the list of default packages is long (see Figure 1), I wondered whether all of them would be included in the output, which would go against that design principle.

Figure 1, .NET Core 2.0 modular, NuGet packages

I wanted to find out which packages were actually deployed with my DLL, so I looked at it with a tool called JustDecompile. I saw that only a small set of the packages were included (Figure 2), which is what I expected.

Figure 2, .NET Core 2.0 modular, NuGet packages

As I mentioned here (Target .NET Core 2.0 and .NET Standard 2.0), see Figure 3, the packages are installed into the C:\Program Files\dotnet directory.

Figure 3, .NET Core 2.0 modular, NuGet packages

As you see in Figure 2, all the package references have a yellow triangle next to them except System.Console. Once I navigated to C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.0.0 and selected, for example, the System.Diagnostics.Debug package / module assembly (Figure 4), I was then able to work with it. This is good stuff.

Figure 4, .NET Core 2.0 modular, NuGet packages
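For context, a typical project file behind this experiment might look like the sketch below (illustrative, not copied from the post). The single SDK/framework reference pulls in the long package list from Figure 1, but those assemblies resolve from the shared framework folder at run time rather than all being copied into your build output:

```xml
<!-- A minimal netcoreapp2.0 project file. The Microsoft.NETCore.App
     metapackage is referenced implicitly by the SDK; its assemblies load
     from C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.0.0. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
</Project>
```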
https://www.thebestcsharpprogrammerintheworld.com/2018/04/17/net-core-2-0-modular-and-nuget-packages/
We have had a REST API for a long time. It’s useful to create new objects (eg. service, node, etc.), to get their current state, modify them, or terminate them. We have been using it for our dashboard, our CLI, and our images. Also, a lot of users are using this REST API to create awesome things on top of Tutum. Today we are releasing a new element in our API called Stream API. We hope this addition will allow you to build even more powerful applications. A Little History For some time, our dashboard has had an advantage over the apps that can be developed on top of Tutum. We had a socket.io server that sent notifications to every user when one of their objects was created, updated or deleted. For example, every time a new service is created or a stack is redeployed, notifications for the different changes happening are sent out. This allows Tutum to give real-time updates to the Dashboard UI, making it responsive and more interactive. We started to see the need to have this same power in a couple of our images. One such example is our HAProxy load balancer. The load balancer polls every 30 seconds to check if the containers in a service have changed. If it has changed, the configuration of the proxy needs to be updated to handle these new containers. This generates a lot of unnecessary requests to our API as most services do not change every 30 seconds. Furthermore, when services do change, it can take up to 29 seconds for HAProxy to update and reflect these changes in its configuration. This past month we have been changing a lot of our code to allow our images and every one of our users to be able to listen to these events by creating a new Stream API. We exchanged socket.io with a standard implementation of web sockets. We did this to ensure cross compatibility with your language of choice. Now our users are able to track in real time every container, service, node, node cluster and action in their Tutum account without needing to poll. 
How to Use It

The first step is to get a websocket client; for Python we recommend this one and for Node.js we recommend this one. Please let us know your suggestions for other languages in the comments section! Once you have the library installed, you need to open a connection to the server and set the handlers for open connection, error, close connection and every message.

token = 'apikey'
username = 'username'
ws = websocket.WebSocketApp(
    'wss://stream.tutum.co/v1/events?token={}&user={}'.format(token, username),
    on_message = on_message,
    on_error = on_error,
    on_close = on_close,
    on_open = on_open)

The most important handler is, arguably, on_message; that's where most of the logic of your code should live. In this example, we parse the message into a dictionary and first check whether it is the "auth" message that arrives once the connection is complete, telling us the socket is ready to receive messages. All following messages contain the new state of an object that changed, so we print the type of object that was modified; the resource URI can be requested to obtain any further information about the object.

def on_message(ws, message):
    msg_as_JSON = json.loads(message)
    type = msg_as_JSON.get("type")
    if type:
        if type == "auth":
            print("Auth completed")
        else:
            print("{}:{}:{}".format(type, msg_as_JSON.get("state"),
                                    msg_as_JSON.get("resource_uri")))

This handler will generate a log like the one below:

Auth completed
container:Stopping:/api/v1/container/f1e1042f-de66-402f-8f64-8ba40bee5c2c/
container:Stopped:/api/v1/container/f1e1042f-de66-402f-8f64-8ba40bee5c2c/
action:Success:/api/v1/action/d1c3e851-e025-433f-a29f-a44aa482fb14/
action:Pending:/api/v1/action/d770dbad-0bca-4a56-818f-fe56b95e3cfb/
action:Pending:/api/v1/action/d770dbad-0bca-4a56-818f-fe56b95e3cfb/
action:In progress:/api/v1/action/d770dbad-0bca-4a56-818f-fe56b95e3cfb/
service:Stopped:/api/v1/service/ba4d5bec-8bc4-443e-b032-64f39cb5a9d8/

For examples using Python and Node.js, check out this gist.
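If you want to unit-test this kind of handler, the parsing can be factored into a small pure function. This is my own sketch, not part of Tutum's documentation:

```python
import json

def classify_event(raw_message):
    """Summarize a Stream API message as (type, state, resource_uri).

    Illustrative helper: "auth" messages carry no state or resource,
    so those fields come back as None.
    """
    msg = json.loads(raw_message)
    kind = msg.get("type")
    if kind == "auth":
        return ("auth", None, None)
    return (kind, msg.get("state"), msg.get("resource_uri"))

print(classify_event('{"type": "auth"}'))  # -> ('auth', None, None)
```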
Using the Stream API, doors open up to some really simple but super interesting use cases. A custom notification system, such as email updates every time a service is stopped, is an obvious one. But this can also be leveraged for more complex scenarios dealing with infrastructure and/or container auto-scaling, custom application triggers, Slack notifications, if-this-then-that systems, etc. Your creativity is the only limit, so let us know what you are building with the Stream API and any way in which we can make it even simpler, more flexible, and ultimately more useful to you.

What's Next?

In the coming weeks, the Stream API will handle logs per container, docker events per node, and docker exec to a particular container. We hope that you are as excited as we are about this new feature, and how it will evolve over the upcoming weeks. Please do not hesitate to send your feedback! Thank you.
https://blog.tutum.co/2015/04/07/presenting-tutum-stream-api/
As the title explains, I am hoping to be able to set an instance variable using data produced by a calculation contained within a method. I produced an example to demonstrate what I'm trying to do, as well as a solution I pieced together after scouring the internet. Obviously though, I failed to successfully replicate what others had done. Thanks in advance for your help. Sidenote: I have no doubt that this may well be closed as a duplicate, but I have thus far failed (after thorough searching) to find anything sufficiently informative to allow me to understand this process.

public class Main {

    private int Max;

    public int getMax() {
        return Max;
    }

    public static void main(String[] args) {
        Main Max = new Main();
        {
            Max.printMax();
        }
    }

    public void myMethod() {
        int Lee = 6;
        this.Max = Lee;
    }

    public void printMax() {
        Main max = new Main();
        int variable = max.getMax();
        System.out.println(variable);
    }
}

I think you have some misunderstandings of how instances of a class work. I recommend you learn the basics of OOP first. Anyway, although you didn't tell me what the expected result should be, I guess that you want to print 6. So here's the solution:

public class Main {

    private int Max;

    public int getMax() {
        return Max;
    }

    public static void main(String[] args) {
        Main Max = new Main();
        Max.printMax();
    }

    public void myMethod() {
        int Lee = 6;
        this.Max = Lee;
    }

    public void printMax() {
        this.myMethod();
        int variable = this.getMax();
        System.out.println(variable);
    }
}

Let me explain what I have changed.
- In the main method, there is no need for the {} after new Main(), so I deleted them.
- In printMax, there is no need to create a Main instance again, because this already is one.
- The reason you didn't get 6 was that the variable Max was never changed. Why? Because you just declared a method that changes Max, but that method (myMethod) was never called! So I added a line to call myMethod.
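For what it's worth, the same idea reads more cleanly with conventional Java naming (lower-case fields and methods, a class name that describes the thing). This is my own minimal sketch, not from the thread above:

```java
public class ScoreKeeper {
    private int max;                     // instance variable

    public void computeMax() {           // the method that sets it
        int lee = 6;                     // stand-in for some real calculation
        this.max = lee;
    }

    public int getMax() {
        return max;
    }

    public static void main(String[] args) {
        ScoreKeeper keeper = new ScoreKeeper();
        keeper.computeMax();             // must run before reading max
        System.out.println(keeper.getMax()); // prints 6
    }
}
```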
https://codedump.io/share/6Q0MBI8FZccl/1/set-instance-variable-from-method-using-java
This plugin might require a paid license, or might take a share of your app's earnings. Check the plugin's repo for more information.

A cross-platform WhatsApp / Messenger / Slack-style keyboard, even for your Cordova app.

Repo:

$ ionic cordova plugin add cordova-plugin-native-keyboard
$ npm install --save @ionic-native/native-keyboard

import { NativeKeyboard } from '@ionic-native/native-keyboard';

constructor(private nativeKeyboard: NativeKeyboard) { }
...

showMessenger(options)
Show the messenger. Takes NativeKeyboardOptions.

hideMessenger(options)
Hide the messenger.

showMessengerKeyboard()
Programmatically pop up the keyboard again if the user dismissed it. Returns: Promise<any>

hideMessengerKeyboard()
Programmatically hide the keyboard (but not the messenger bar). Returns: Promise<any>

updateMessenger(options)
Manipulate the messenger while it's open, for instance if you want to update the text programmatically based on what the user typed.

Option descriptions (types shown in parentheses):

(Function) A function invoked when the user submits their input. Receives the text as a single property. Make sure your page is UTF-8 encoded so Chinese and Emoji are rendered OK.
(Function) A function invoked when the keyboard is about to pop up. Receives the height as a single property. (iOS only)
(Function) A function invoked when the keyboard popped up. Receives the height as a single property.
(Function) A function invoked when the keyboard is about to close. (iOS only)
(Function) A function invoked when the keyboard closed.
(Function) A function invoked when any key is pressed; sends the entire text as the response.
(HTMLElement) Highly recommended to pass in if you want to replicate the behavior of the videos above (scroll down when the keyboard opens). Pass in the scrollable DOM element containing the messages.
(boolean) If autoscrollElement was set, you can also make the list scroll down initially, when the messenger bar is shown (without the keyboard popping up).
(boolean) Setting this to true is like the videos above: the keyboard doesn't close upon submit. Defaults to false.
(boolean) Makes the messenger bar slide in from the bottom. Defaults to false.
(boolean) Open the keyboard when showing the messenger. Defaults to false.
(string) The default text set in the messenger input bar.
(string) The color of the typed text. Defaults to #444444.
(string) Like a regular HTML input placeholder.
(string) The color of the placeholder text. Defaults to #CCCCCC.
(string) The background color of the messenger bar. Defaults to #F6F6F6.
(string) The background color of the textview. Looks nicest on Android if it's the same color as the backgroundColor property. Defaults to #F6F6F6.
(string) The border color of the textview. Defaults to #666666. (iOS only)
(number) Setting this > 0 will make a counter show up on iOS (and ignore superfluous input on Android, for now).
(string) Options are: "none", "split", "countdown", "countdownreversed". Note that if maxChars is set, "none" will still show a counter. Defaults to "none". (iOS only)
(string) Options are: "default", "decimalpad", "phonepad", "numberpad", "namephonepad", "number", "email", "twitter", "url", "alphabet", "search", "ascii". (iOS only)
(string) Options are: "light", "dark". (iOS only)
(boolean) Disables things like the Emoji keyboard and the Predictive text entry bar. (iOS only)

Button options (NativeKeyboardLeftButton / NativeKeyboardButton):

(string) Either "text" (Android only currently), "fontawesome" or "ionicon".
(string) Depends on the type. Examples: for "text" use "Send", for "fontawesome" use "fa-battery-quarter", for "ionicon" use "\uf48a" (go to, right-click and inspect the icon and use the value you find in :before). Note that some fonticons are not supported, as the embedded fonts in the plugin may lag behind a little, so try one of the older icons first.
(string) If type is "text" you can set this to either "normal", "bold" or "italic".
(Function) A function invoked when the button is pressed. Use this button to prompt the user what he wants to do next, for instance by rendering an ActionSheet.
Set to true to disable the button once text has been entered. Replace the messenger's text by this. The current text remains if omitted. Position the cursor anywhere in the text range. Defaults to the end of the text. If false or omitted no changes to the keyboard state are made.
https://ionicframework.com/docs/native/native-keyboard/
NAME
send - send a message on a socket

SYNOPSIS
#include <sys/socket.h>

ssize_t send(int socket, const void *buffer, size_t length, int flags);

DESCRIPTION
The send() function shall initiate transmission of a message from the specified socket to its peer. The send() function shall send a message only when the socket is connected (including when the peer of a connectionless socket has been set via connect()).

The send() function takes a socket descriptor, a pointer to the buffer containing the message, the length of the message, and flags. The length of the message to be sent is specified by the length argument. If the message is too long to pass through the underlying protocol, send() shall fail and no data shall be transmitted.

The socket in use may require the process to have appropriate privileges to use the send() function.

RETURN VALUE
Upon successful completion, send() shall return the number of bytes sent. Otherwise, -1 shall be returned and errno set to indicate the error.

ERRORS
The send() function shall fail under the error conditions defined by the standard; the detailed list is given in the full specification.

EXAMPLES
None.

APPLICATION USAGE
The send() function is equivalent to sendto() with a null pointer dest_len argument, and to write() if no flags are used.

RATIONALE
None.

FUTURE DIRECTIONS
None.

SEE ALSO
connect(), getsockopt(), poll(), recv(), recvfrom(), recvmsg(), select(), sendmsg(), sendto(), setsockopt(), shutdown(), socket(), the Base Definitions volume of IEEE Std 1003.1-2001, <sys/socket.h>

CHANGE HISTORY
First released in Issue 6. Derived from the XNS, Issue 5.2 specification.
http://pubs.opengroup.org/onlinepubs/009696799/functions/send.html
Learning Javascript As a Rubyist
Mindy Zwanziger
Originally published at Medium on ・3 min read

A handful of simplified comparisons to make those first few steps into Javascript a bit easier.

I love Ruby. I also recently discovered that some people vehemently dislike it. So here I am, learning Javascript. The people I talked to would probably be appalled that their comments had such an effect on me. Here's a secret: I still love Ruby, and Javascript is the next part of the curriculum at the online school I'm attending (Launch School), so I'd be studying it anyway, but that makes for a much less interesting story. So here I am, learning Javascript.

The first few days of diving into a new language feel a little like a firehose of information being directed at your face. As a former high school teacher, I'm here to tell you how to deal, with a tip: we learn best by connecting new information to old information. So let's make some connections!

Writing Syntax

Commenting

In Ruby, we use # to make one-line comments, and =begin with =end for our multi-line comments.

A Ruby Example:

=begin
Honestly, I find this
a weird syntax for comments.
=end

To write a comment in Javascript, you use // for a one-liner, and /* with */ for a multi-line comment.

A Javascript Example:

// I'm a wonderfully useful comment!

/* Me Too! */

cAsE

In Ruby, we use snake_case for variables, files, directories, etc…, CamelCase for class names, and SCREAMING_SNAKE for constants. Javascript mostly uses camelCase. You'll see PascalCase with constructors (more about whatever those are soon), and SCREAMING_SNAKE for constants, just like in Ruby. Underscores (_) are for use in constants only.

Methods vs. Functions

In Ruby, we use the term "method" to denote a function. In Javascript, we see the term "function" more often, as it's part of the method/function definition.
A Ruby Example:

def method_name(parameters)
  # some code
end

A Javascript Example:

function methodName(parameters) {
  // some code
}

I find this quote from Tiffany White, in a dev.to article earlier this year, helpful to further clarify this delineation:

In short: a method is a function that belongs to a class. In JavaScript, however, a method is a function that belongs to an object.

String Interpolation

Both languages provide syntax for inserting variable values into a string.

A Ruby Example:

site_name = "Medium"
"Welcome to #{site_name}"

Javascript uses backticks (`) and a dollar symbol ($) to create what is called a "Template String Literal", which allows for string interpolation.

A Javascript Example:

var siteName = 'Medium';
`Welcome to ${siteName}!`

Methods, Etc…

A great reference point for these is the Mozilla Developer Network (MDN). Here's the Javascript Reference link. I'd recommend starting by looking at something familiar, like the Array documentation. Let's start with something familiar: instance methods!

Instance method

Luckily for us, both Ruby and Javascript use instance methods.

A Ruby example:

[4, 3, 2].sort

A Javascript example:

[4, 3, 2].sort
// Oh hey, they're the same -- NICE.

In our handy MDN reference list, you can tell a method is an instance method if its entry includes the word prototype.
Static method

These are akin to Ruby's class methods.

A Ruby example:

Hash.new

A Javascript example:

String.fromCharCode(65, 66, 67)
// Returns 'ABC'

Constructor

As an introductory point of reference, constructors can be compared to Ruby's classes. While Javascript constructors and Ruby classes are very different at the core, they are both the vehicles by which a new object is created, and in that, they are similar.

A Ruby example:

class Dog
  def initialize(name, age, breed)
    @name = name
    @age = age
    @breed = breed
  end
end

new_dog_object = Dog.new('Coco', 8, 'Mixed')

A Javascript example:

function Dog(name, age, breed) {
  this.name = name;
  this.age = age;
  this.breed = breed;
}

var newDogObject = new Dog('Coco', 8, 'Mixed');

Other comparisons can be made, of course, but this will give you a great place to start!

Code well, my friends!

article.stop();
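P.S. One more note of mine on the constructor comparison above (not from the original article): modern Javascript also has class syntax, which is sugar over the constructor pattern and reads even closer to the Ruby version.

```javascript
// ES6 class syntax: builds the same object as the constructor-function version.
class Dog {
  constructor(name, age, breed) {
    this.name = name;
    this.age = age;
    this.breed = breed;
  }
}

const newDogObject = new Dog('Coco', 8, 'Mixed');
console.log(newDogObject.name); // 'Coco'
```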
https://practicaldev-herokuapp-com.global.ssl.fastly.net/mindyzwan/learning-javascript-as-a-rubyist-4cbn
This morning I caught myself running through my head the fragments I liked from the What's New with TensorFlow? talk at Google Cloud Next in San Francisco. Then I thought for a moment: why not share my ultra-short summary with you? (You may still end up wanting to watch the video, and you should: the speaker is amazing.) So we start…

# 1 It's a powerful machine learning system

TensorFlow is a machine learning system that can be your new best friend if you have a lot of data and a desire to comprehend the latest advancement in artificial intelligence: deep learning. Neural networks. Big ones. It's not a data science magic wand, it's a whole book of spells… which means you should probably stop reading if all you want is to fit a regression line to a 20-by-2 table. But tremble if you want more.

TensorFlow is being used to hunt new planets, prevent blindness by helping doctors scan for diabetic retinopathy, and save forests by alerting authorities to signs of illegal logging. This is what AlphaGo and Google Cloud Vision are built on, and it's yours to use. TensorFlow is open source; you can download it for free and get started right away.

TensorFlow's discovery of Kepler-90i makes Kepler-90 the only other known system with eight planets around a single star. We haven't found a system with more planets yet, so I guess that means we're still tied with Kepler-90. More details here.

# 2 The fancy approach is optional

I'm beside myself about TensorFlow Eager. If you tried TensorFlow in the old days and ran away screaming because it forced you to code like a scientist, or like an alien, rather than a developer, come back! TensorFlow's eager execution lets you interact with it like a real Python programmer: all the immediacy of writing and debugging code line by line, instead of building the whole graph up front and the stress that comes with it.
I myself used to be a scientist (and quite possibly an alien), and even I was delighted when eager execution came out. It pleases instantly!

# 3 You can build neural networks layer by layer

Keras + TensorFlow = simplified neural network design! Keras brings the convenience and simplicity of prototyping that TensorFlow long needed. If you like object-oriented thinking and like building neural networks one layer at a time, you will love tf.keras. In a few lines of code below, we create a sequential neural network with standard bells and whistles such as dropout (some day I'll fall into a lyrical mood and share my metaphor for dropout; it involves staplers and the flu).

Oh, you like puzzles, don't you? Patience. Don't think too much about staplers.

# 4 It's not just Python

Okay, you've been complaining about TensorFlow's obsession with Python for a while now. Good news! TensorFlow isn't just for Pythonistas anymore. It now works in many languages, from R to Swift to JavaScript.

# 5 You can do everything in the browser

Speaking of JavaScript, you can train and execute models in the browser using TensorFlow.js. Go play with the cool demos; I'll wait here for your return.

Real-time human pose estimation in the browser using TensorFlow.js. Turn on your camera for a demonstration here. Or don't get out of your chair. ¯\_(ツ)_/¯ It's up to you.
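Back to # 3 for a moment: the "few lines of code" referenced there appeared as an image in the original post and did not survive into this text. A reconstruction of that kind of model might look like the sketch below; the layer sizes are my assumptions, and actually running it requires TensorFlow to be installed.

```python
def build_model(tf):
    """Build a small tf.keras Sequential model, one layer at a time.

    The tf module is passed in as an argument so the definition itself
    carries no hard dependency on TensorFlow being installed.
    """
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),   # the "bells and whistles": dropout
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

if __name__ == "__main__":
    try:
        import tensorflow as tf
        build_model(tf).summary()
    except ImportError:
        print("TensorFlow not installed; the definition above is the point.")
```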
Well, I’ve been to several countries where a piece of paper like the one Lawrence is holding is considered toilet paper. # 7 Specialized equipment has improved If you are tired of waiting for your processor to finish working with your data to train your neural network, you can now use your hardware specially designed to work with cloud TPUs . T means tensor. Like TensorFlow … a coincidence? I don’t think so! A few weeks ago, Google announced the TPU versions in version 3. # 8 New data streams have been greatly improved What are you doing with numpy ? If you want to do it in TensorFlow, but get angry and give up, the tf.data namespace now makes your handling of input in TensorFlow more expressive and efficient. tf.data gives you fast, flexible and easy-to-use data pipelines synchronized with training. # 9 You don’t have to start from scratch You know it’s not very fun to get started with machine learning? Blank new page in your editor and no sample code for miles around. With TensorFlow Hub, you can use a better version of the time-honored tradition of borrowing someone else’s code and calling it your own (otherwise known as professional software development). TensorFlow Hub is a repository for reusable pre-trainable machine learning model components packaged for one-time reuse. Help yourself!
https://www.thinkdataanalytics.com/9-things-you-should-know-about-tensorflow/
Should I be adding propTypes to parent components and their child components? For example, I have a Header component:

<Header modal={this.state.modal} lives={this.state.lives} score={this.state.score} />

const Header = function(props) {
  if (props.modal) {
    return (<Logo logo={logo} />);
  } else {
    return (
      <div>
        <Lives lives={props.lives} />
        <Score score={props.score} />
      </div>
    );
  }
};

Header.propTypes = {
  modal: React.PropTypes.bool.isRequired,
  lives: React.PropTypes.number.isRequired,
  score: React.PropTypes.number.isRequired,
};

which renders child components such as:

<Lives lives={props.lives} />
<Score score={props.score} />

const Score = function(props) {
  return (
    <p className="score score--right">
      {props.score} pts
    </p>
  );
};

Score.propTypes = {
  score: React.PropTypes.number.isRequired,
};

In my opinion, you should validate propTypes in every component, regardless of whether the props have already been validated in the parent component. Think of it like this: if a component uses a prop, it should validate the type of that prop.

One of the major advantages of React is component re-usability. Even if, in your case, Score is only used by Header at the moment, you may later find it useful in other places where Score gets its score from other sources, so you should make sure it receives the correct type. If you are worried about the bundle size added by these duplicated propType validations in production, there is a Babel plugin to help with that.
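The answer does not name the Babel plugin; a commonly used one is babel-plugin-transform-react-remove-prop-types (an assumption on my part, since the original link is not in this copy). A .babelrc that strips propTypes only from production builds might look like:

```json
{
  "env": {
    "production": {
      "plugins": ["transform-react-remove-prop-types"]
    }
  }
}
```

With this config you keep full propType validation during development while shipping none of it to users.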
https://codedump.io/share/8HFF1DhOwAHZ/1/validate-proptypes-on-parent-and-child-components
public static T ConvertTo<T>(object value)
{
    // check for value == null, thx alex
    // (the original null handling was lost; returning default is one reasonable choice)
    if (value == null)
    {
        return default(T);
    }

    Type t = typeof(T);

    if (t.IsEnum)
    {
        // if enum, use Parse
        return (T)Enum.Parse(t, value.ToString(), false);
    }
    else
    {
        // if we have a custom type converter then use it
        TypeConverter td = TypeDescriptor.GetConverter(t);
        if (td.CanConvertFrom(value.GetType()))
        {
            return (T)td.ConvertFrom(value);
        }
        else
        {
            // otherwise use ChangeType
            return (T)Convert.ChangeType(value, t);
        }
    }
}

DateTime dt = TypeConversionUtility.ConvertTo<DateTime>(dateVal);

Use of the included code sample is subject to the terms specified at . This posting is provided "AS IS" with no warranties, and confers no rights.

Hi. I've found this post very useful and was able to use it in my application. One thing you may consider is this: when you create a TypeConverter for the target type, it is not usable in all cases, i.e. converting string to Guid fails. This was my solution:

// if we have a custom type converter then use it
TypeConverter td = TypeDescriptor.GetConverter(type);
if (td.CanConvertFrom(valueType))
{
    return td.ConvertFrom(value);
}

// if ConvertFrom is not usable, try the other way around:
td = TypeDescriptor.GetConverter(valueType);
if (td.CanConvertTo(type))
{
    return td.ConvertTo(value, type);
}

// otherwise use the changetype
// ...

dev lead @ microsoft
http://blogs.msdn.com/jongallant/archive/2006/06/19/637023.aspx
Introducing Project Tye

Amiee

Project Tye is an experimental developer tool that makes developing, testing, and deploying microservices and distributed applications easier. When building an app made up of multiple projects, you often want to run more than one at a time, such as a website that communicates with a backend API, or several services all communicating with each other. Today, this can be difficult to set up and not as smooth as it could be, and it's only the very first step in trying to get started with something like building out a distributed application. Once you have an inner-loop experience, there is then a sometimes steep learning curve to get your distributed app onto a platform such as Kubernetes.

The project has two main goals:

- Making development of microservices easier by:
  - Running many services with one command
  - Using dependencies in containers
  - Discovering addresses of other services using simple conventions
- Automating deployment of .NET applications to Kubernetes by:
  - Automatically containerizing .NET applications
  - Generating Kubernetes manifests with minimal knowledge or configuration
  - Using a single configuration file

If you have an app that talks to a database, or an app that is made up of a couple of different processes that communicate with each other, then we think Tye will help ease some of the common pain points you've experienced. We have recently demonstrated Tye in a few Build sessions that we encourage you to watch: Cloud Native Apps with .NET and AKS, and Journey to One .NET.

Tour of Tye

Installation

To get started with Tye, you will first need to have .NET Core 3.1 installed on your machine. Tye can then be installed as a global tool using the following command:

dotnet tool install -g Microsoft.Tye --version "0.2.0-alpha.20258.3"

Running a single service

Tye makes it very easy to run single applications. To demonstrate this:

1.
Make a new folder called microservices and navigate to it:

mkdir microservices
cd microservices

2. Then create a frontend project:

dotnet new razor -n frontend

3. Now run this project using tye run:

tye run frontend

The above displays how Tye is building, running, and monitoring the frontend application. One key feature of tye run is a dashboard to view the state of your application. Navigate to the dashboard URL to see it running. The dashboard is the UI for Tye that displays a list of all of your services. The Bindings column has links to the listening URLs of the service. The Logs column allows you to view the streaming logs for the service. Services written using ASP.NET Core will have their listening ports assigned randomly if not explicitly configured. This is useful to avoid common issues like port conflicts.

Running multiple services

Instead of just a single application, suppose we have a multi-application scenario where our frontend project now needs to communicate with a backend project. If you haven't already, stop the existing tye run command using Ctrl + C.

1. Create a backend API that the frontend will call, inside of the microservices/ folder:

dotnet new webapi -n backend

2. Then create a solution file and add both projects:

dotnet new sln
dotnet sln add frontend backend

You should now have a solution called microservices.sln that references the frontend and backend projects.

3. Run tye in the folder with the solution:

tye run

The dashboard should show both the frontend and backend services. You can navigate to both of them through either the dashboard or the URL output by tye run. The backend service in this example was created using the webapi project template and will return an HTTP 404 for its root URL.

Getting the frontend to communicate with the backend

Now that we have two applications running, let's make them communicate. To get both of these applications communicating with each other, Tye utilizes service discovery.
In general terms, service discovery describes the process by which one service figures out the address of another service. Tye uses environment variables for specifying connection strings and URIs of services. The simplest way to use Tye's service discovery is through the Microsoft.Extensions.Configuration system – available by default in ASP.NET Core or .NET Core Worker projects. In addition to this, we provide the Microsoft.Tye.Extensions.Configuration package with some Tye-specific extensions layered on top of the configuration system. If you want to learn more about Tye's philosophy on service discovery and see detailed usage examples, check out this reference document.

1. If you haven't already, stop the existing tye run command using Ctrl + C. Open the solution in your editor of choice.

2. Add a file WeatherForecast.cs to the frontend project:

using System;

namespace frontend
{
    public class WeatherForecast
    {
        public DateTime Date { get; set; }
        public int TemperatureC { get; set; }
        public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
        public string Summary { get; set; }
    }
}

This will match the backend WeatherForecast.cs.

3. Add a WeatherClient class to the frontend project that uses an HttpClient to fetch the forecasts from the backend.

4. Add a reference to the Microsoft.Tye.Extensions.Configuration package to the frontend project:

dotnet add frontend/frontend.csproj package Microsoft.Tye.Extensions.Configuration --version "0.2.0-*"

5. Now wire up the WeatherClient to use the correct URL for the backend service.

6. Add a Forecasts property to the Index page model under Pages\Index.cshtml.cs in the frontend project:

...
public WeatherForecast[] Forecasts { get; set; }
...

7. Change the OnGet method to take the WeatherClient to call the backend service and store the result in the Forecasts property:

...
public async Task OnGet([FromServices]WeatherClient client)
{
    Forecasts = await client.GetWeatherAsync();
}
...

8. Update Pages\Index.cshtml in the frontend project to render the Forecasts property as a table.

9. Run the project with tye run and the frontend service should be able to successfully call the backend service!
When you visit the frontend service you should see a table of weather data. This data was produced randomly in the backend service. The fact that you're seeing it in a web UI in the frontend means that the services are able to communicate. Unfortunately, this doesn't work out of the box on Linux right now due to how self-signed certificates are handled; please see the workaround here.

Tye's configuration schema

Tye has an optional configuration file (tye.yaml) to enable customizing settings. This file contains all of your projects and external dependencies. If you have an existing solution, Tye will automatically populate it with all of your current projects. To initialize this file, run the following command in the microservices directory to generate a default tye.yaml file:

tye init

The contents of the tye.yaml should look like this:

name: microservice
services:
- name: backend
  project: backend\backend.csproj
- name: frontend
  project: frontend\frontend.csproj

The top-level scope (like the name node) is where global settings are applied. tye.yaml lists all of the application's services under the services node. This is the place for per-service configuration. To learn more about Tye's yaml specification and schema, you can check it out here in Tye's repository on GitHub. We provide a json-schema for tye.yaml, and some editors support json-schema for completion and validation of yaml files. See json-schema for instructions.

Adding external dependencies (Redis)

Not only does Tye make it easy to run and deploy your applications to Kubernetes, it's also fairly simple to add external dependencies to your applications. We will now add redis to the frontend and backend applications to store data. Tye can use Docker to run images that run as part of your application. Make sure that Docker is installed on your machine.

1. Change the WeatherForecastController.Get() method in the backend project to cache the weather information in redis using an IDistributedCache.

2.
Add the following usings to the top of the file:

using Microsoft.Extensions.Caching.Distributed;
using System.Text.Json;

3. Update Get():

[HttpGet]
public async Task<string> Get([FromServices]IDistributedCache cache)
{
    var weather = await cache.GetStringAsync("weather");
    if (weather == null)
    {
        var rng = new Random();
        var forecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        })
        .ToArray();

        weather = JsonSerializer.Serialize(forecasts);

        await cache.SetStringAsync("weather", weather, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(5)
        });
    }

    return weather;
}

This will store the weather data in Redis with an expiration time of 5 seconds.

4. Add a package reference to Microsoft.Extensions.Caching.StackExchangeRedis in the backend project:

cd backend/
dotnet add package Microsoft.Extensions.Caching.StackExchangeRedis
cd ..

5. Modify Startup.ConfigureServices in the backend project to add the redis IDistributedCache implementation:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    services.AddStackExchangeRedisCache(o =>
    {
        o.Configuration = Configuration.GetConnectionString("redis");
    });
}

The above configures redis with the connection string for the redis service injected by the Tye host.

6. Modify tye.yaml to include redis as a dependency:

name: microservice
services:
- name: backend
  project: backend\backend.csproj
- name: frontend
  project: frontend\frontend.csproj
- name: redis
  image: redis
  bindings:
  - port: 6379
    connectionString: "${host}:${port}"
- name: redis-cli
  image: redis
  args: "redis-cli -h redis MONITOR"

We've added two services to the tye.yaml file: the redis service itself, and a redis-cli service that we will use to watch the data being sent to and retrieved from redis.
The "${host}:${port}" format in the connectionString property will substitute the values of the host and port number to produce a connection string that can be used with StackExchange.Redis.

7. Run the tye command line in the solution root.

Make sure your command-line is in the microservices/ directory. One of the previous steps had you change directories to edit a specific project.

tye run

Navigate to the dashboard; you will now see both redis and the redis-cli listed as running. Navigate to the frontend application and verify that the data returned is the same after refreshing the page multiple times. New content will be loaded every 5 seconds, so if you wait that long and refresh again, you should see new data. You can also look at the redis-cli logs using the dashboard and see what data is being cached in redis.

Deploying to Kubernetes

Tye makes the process of deploying your application to Kubernetes very simple, with minimal knowledge or configuration required. Tye will use your current credentials for pushing Docker images and accessing Kubernetes clusters. If you have configured kubectl with a context already, that's what tye deploy is going to use!

Prior to deploying your application, make sure to have the following:

- Docker installed, based on your operating system
- A container registry. Docker by default will create a container registry on DockerHub. You could also use Azure Container Registry (ACR) or another container registry of your choice.
- A Kubernetes cluster. There are many different options here, including:
  - Kubernetes in Docker Desktop
  - Azure Kubernetes Service
  - Minikube
  - K3s, a lightweight single-binary certified Kubernetes distribution from Rancher
  - Another Kubernetes provider of your choice
If you choose a container registry provided by a cloud provider (other than DockerHub), you will likely have to take some steps to configure your Kubernetes cluster to allow access. Follow the instructions provided by your cloud provider.

Deploying Redis

tye deploy will not deploy the redis configuration, so you need to deploy it first by running:

kubectl apply -f

This will create a deployment and service for redis.

Tye deploy

You can deploy your application by running the following command:

tye deploy --interactive

Enter the Container Registry (ex: example.azurecr.io for Azure, or example for DockerHub):

You will be prompted to enter your container registry. This is needed to tag images and to push them to a location accessible by Kubernetes. If you are using DockerHub, the registry name will be your DockerHub username. If you are using a standalone container registry (for instance from your cloud provider), the registry name will look like a hostname, e.g. example.azurecr.io.

You'll also be prompted for the connection string for redis. Enter the following to use the instance that you just deployed:

redis:6379

tye deploy will create a Kubernetes secret to store the connection string. --interactive is needed here to create the secret. This is a one-time configuration step. In a CI/CD scenario you would not want to have to specify connection strings over and over; deployment would rely on the existing configuration in the cluster. Tye uses Kubernetes secrets to store connection information about dependencies like redis that might live outside the cluster. Tye will automatically generate mappings between service names, binding names, and secret names.

tye deploy does many different things to deploy an application to Kubernetes. It will:

- Create a docker image for each project in your application.
- Push each docker image to your container registry.
- Generate a Kubernetes Deployment and Service for each project.
- Apply the generated Deployment and Service to your current Kubernetes context.

You should now see three pods running after deploying:

kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
backend-ccfcd756f-xk2q9     1/1     Running   0          85m
frontend-84bbdf4f7d-6r5zp   1/1     Running   0          85m
redis-5f554bd8bd-rv26p      1/1     Running   0          98m

To visit the frontend application, you will need to port-forward to access it from outside the cluster:

kubectl port-forward svc/frontend 5000:80

Now navigate to the forwarded address (localhost port 5000) to view the frontend application working on Kubernetes. Currently Tye does not automatically enable TLS within the cluster, and so communication takes place over HTTP instead of HTTPS. This is the typical way to deploy services in Kubernetes; we may look to enable TLS as an option or by default in the future.

Adding a registry to tye.yaml

If you want to use tye deploy as part of a CI/CD system, it's expected that you'll have a tye.yaml file initialized. You will then need to add a container registry to tye.yaml. Based on what container registry you configured, add the following line in the tye.yaml file:

registry: <registry_name>

Now it's possible to use tye deploy without --interactive since the registry is stored as part of configuration. This step may not make much sense if you're using tye.yaml to store a personal DockerHub username. A more typical use case would be storing the name of a private registry for use in a CI/CD system. For a conceptual overview of how Tye behaves when using tye deploy for deployment, check out this document.

Undeploying your application

After deploying and playing around with the application, you may want to remove all resources associated with it from the Kubernetes cluster. You can remove resources by running:

tye undeploy

This will remove all deployed resources.
If you'd like to see what resources would be deleted, you can run:

tye undeploy --what-if

Follow up

If you want to experiment more with Tye, we have a variety of sample applications and tutorials that you can walk through; check them out below.

Tye Roadmap

We have been diligently working on adding new capabilities and integrations to continuously improve Tye. Here are some integrations that we have recently released, with information on how to get started with each:

- Ingress, to expose pods/services created to the public internet.
- Redis, to store data, cache, or act as a message broker.
- Dapr, for integrating a Dapr application with Tye.
- Zipkin, for distributed tracing.
- Elastic Stack, for logging.

While we are excited about the promise Tye holds, it's an experimental project and not a committed product. During this experimental phase we expect to engage deeply with anyone trying out Tye to hear feedback and suggestions. The point of doing experiments in the open is to help us explore the space as much as we can and use what we learn to determine what we should be building and shipping in the future. Project Tye is currently committed as an experiment until .NET 5 ships, at which point we will evaluate what we have and all that we've learnt to decide what we should do in the future. Our goal is to ship every month, and some new capabilities that we are looking into for Tye include:

- More deployment targets
- Sidecar support
- Connected development
- Database migrations

Conclusion

We are excited by the potential Tye has to make developing distributed applications easier, and we need your feedback to make sure it reaches that potential. We'd really love for you to try it out and tell us what you think. There is a link to a survey on the Tye dashboard that you can fill out, or you can create issues and talk to us on GitHub. Either way, we'd love to hear what you think.
If you run this on Windows and get an Exception, run first.

What an amazing project, I hope I get the time to dig deeper into Tye and ASP.NET Core 🙂

Hi Jakob, definitely let us know what you think as you dig deeper into Tye! There's a survey on the dashboard that you can fill out; we would love your feedback. If you come across any issues with Tye, feel free to create an issue on GitHub as well! 🙂

Really excited about this, hope it progresses quickly and we can use it in production.

This is completely amazing and much needed. It's already very good!

Wow, this could obsolete my batch script to run all services in Windows Terminal.

Hi, this is brilliant stuff. Can I just ask, after following the steps above, how do I un-deploy the Redis pod?

Just in case I'm not the only one, this seems to work.

Yeah, the reason that tye won't un-deploy redis is that tye deploy didn't actually deploy redis. The command above will remove redis, thanks for calling that out!

Great article! Are there any preferred ways of having a seeded database? I was considering a docker image with the database included, is that weird? Also, how do I include a non-Blazor front-end? I have a SPA in Vue that I'd like to have yarn serve somehow… Thanks for any tips
https://devblogs.microsoft.com/aspnet/introducing-project-tye/
Hi!

I'm glad to announce the first release of the checkpoint-restore tool.

This project is an attempt to implement the checkpoint-restore functionality for processes and containers without driving too much code into the kernel tree, but putting "various oddball helper code" there instead.

The tool can already be used for checkpointing and restoring various individual applications. And the greatest thing about this so far is that most of the below functionality has the required kernel support in the recently released v3.5!

So, we now support:

* x86_64 architecture
* process linkage
* process groups and sessions (without ttys though :\)
* memory mappings of any kind (shared, file, etc.)
* threads
* open files (shared between tasks and partially opened-and-unlinked)
* pipes and fifos with data
* unix sockets with packet queue contents
* TCP and UDP sockets (TCP connection support exists, but needs polishing)
* inotify, eventpoll and eventfd descriptors
* tasks' sigaction setup, credentials and itimers
* IPC, mount and PID namespaces

Though namespace support is in there, we do not yet support checkpoint/restore of an LXC container, but we're close to it :)

I'd like to thank everyone who took part in the discussions of the new kernel APIs; the feedback was great! Special thanks go to Linus for letting the kernel parts in early, instead of making them sit out of tree till becoming stable enough.

Tarball with the tool sources is at
git repo is at
some sort of docs are growing at

There are still things for which we don't have the kernel support merged yet (SysV IPC and various anon file descriptors, i.e. inotify, eventpoll, eventfd). We have the kernel branch with the stuff applied available at
https://lkml.org/lkml/2012/7/23/58
When applying for developer roles, the interviewer might ask you to solve coding problems, and some of the most basic ones involve graph algorithms like BFS, DFS and Dijkstra's algorithm. You should have a clear understanding of graph algorithms and their data structures if you want to perform well on those challenges. This article will give you an idea of the well-known graph algorithms and data structures to ace your interview.

Let's first cover what a graph data structure is. It is a data structure that stores data in the form of interconnected edges (paths) and vertices (nodes). These data structures have a lot of practical applications. For instance, Facebook's Graph API is a perfect example of the application of graphs to real-life problems. Everything is a vertex or an edge on the Graph API. A vertex is anything that has some characteristic properties and can store data. The vertices of the Facebook Graph API are Pages, Places, Users, Events, Comments, etc. On the other hand, every connection is an edge. Examples of Graph API edges are a user posting a comment, photo, video, etc.

Common Operations On Graph Data Structures

A graph data structure (V, E) consists of:

- A collection of nodes or vertices (V)
- A collection of paths or edges (E)

You can manage graph data structures using the common operations mentioned below.

- contains - It checks if a graph has a certain value.
- addNode - It adds vertices to the graph.
- removeNode - It removes vertices from the graph.
- hasEdge - It checks if a path or connection exists between any two vertices in a graph.
- addEdge - It adds paths or links between vertices in a graph.
- removeEdge - It removes paths or connections between vertices in a graph.

Fundamental Graph Algorithms

Let's look at some graph algorithms along with their respective data structures and code examples.
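Before getting to the algorithms, the operations listed above can be sketched as a small adjacency-list class. The method names mirror the list; this is an illustration, not a standard library API:

```python
class Graph:
    """A minimal undirected graph backed by an adjacency list."""

    def __init__(self):
        self.adjacency = {}  # node -> set of neighboring nodes

    def contains(self, node):
        return node in self.adjacency

    def add_node(self, node):
        self.adjacency.setdefault(node, set())

    def remove_node(self, node):
        self.adjacency.pop(node, None)
        for neighbors in self.adjacency.values():
            neighbors.discard(node)  # drop edges pointing at the node

    def has_edge(self, a, b):
        return b in self.adjacency.get(a, set())

    def add_edge(self, a, b):
        # undirected edge: store it in both directions
        self.add_node(a)
        self.add_node(b)
        self.adjacency[a].add(b)
        self.adjacency[b].add(a)

    def remove_edge(self, a, b):
        self.adjacency.get(a, set()).discard(b)
        self.adjacency.get(b, set()).discard(a)

g = Graph()
g.add_edge("A", "B")
g.add_edge("B", "C")
print(g.contains("A"), g.has_edge("A", "C"))  # prints: True False
```

The same structure (a dict of neighbor collections) underlies the traversal examples below.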
Breadth-First Search (BFS)

It is a graph traversal algorithm that starts from a chosen node (the root node) and explores all unexplored (neighboring) nodes level by level. You can consider any node in the graph as a root node when using BFS for traversal. BFS searches all the vertices of a graph or tree data structure, putting every vertex into one of the following categories:

- Visited
- Non-visited

BFS has a wide variety of applications. For instance, it can create web page indexes in web crawlers. It can also find the neighboring locations from a given source location. Breadth-first search uses the Queue data structure to find the shortest path in a given graph and makes sure that every node is visited not more than once.

Code Example

Below is a Python code example that traverses a graph using the breadth-first search algorithm.

# Using a BFS algorithm
import collections

def bfs(graph, root):
    visited, queue = set(), collections.deque([root])
    visited.add(root)

    while queue:
        # Dequeuing a vertex from the queue
        vertex = queue.popleft()
        print(str(vertex) + " ", end="")

        # If a neighbor has not been visited, marking it as
        # visited and enqueuing it
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

if __name__ == '__main__':
    graph = {0: [1, 2], 1: [2], 2: [3], 3: [1, 2]}
    print("Below is the Breadth First Traversal: ")
    bfs(graph, 0)

The output is as:

Below is the Breadth First Traversal:
0 1 2 3

Depth First Search (DFS)

The depth-first search algorithm starts the traversal from an initial node of a given graph and goes deeper until we find the target node or a leaf node (with no children). DFS then backtracks from the leaf node towards the most recent node with unexplored neighbors. The depth-first search algorithm uses the Stack data structure. It traverses from an arbitrary node, marks the node, and moves to an adjacent unmarked node.

Code Example

Below is the code example that traverses a graph using the depth-first search algorithm.
# Using a DFS algorithm
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start)

    # Recursing into every unvisited neighbor
    for next_vertex in graph[start] - visited:
        dfs(graph, next_vertex, visited)
    return visited

graph = {'0': set(['1', '2']),
         '1': set(['0', '3', '4']),
         '2': set(['0']),
         '3': set(['1']),
         '4': set(['2', '3'])}

dfs(graph, '0')

The output is as:

Dijkstra Algorithm

Dijkstra's algorithm is a shortest-path algorithm that computes the shortest path between the source node (a given node) and all other nodes in a given graph. It uses the edges' weights to find a path that minimizes the total weight (total distance) between the source node and all other nodes.

The most commonly used data structure for Dijkstra's algorithm is the Minimum Priority Queue, because the operations of this algorithm match the specialty of a minimum priority queue. The minimum priority queue is a data structure that manages a list of values (keys) and prioritizes the elements with minimum value. It supports operations like getMin(), extractMin(), insert(element), etc.

Code Example

Below is the code example that computes the shortest distance from a source node to all other nodes using Dijkstra's algorithm.

import sys

# Providing the graph with vertices and edges
v_graph = [[0, 0, 1, 1, 0, 0, 0],
           [0, 0, 1, 0, 0, 1, 0],
           [1, 1, 0, 1, 1, 0, 0],
           [1, 0, 1, 0, 0, 0, 1],
           [0, 0, 1, 0, 0, 1, 0],
           [0, 1, 0, 0, 1, 0, 1],
           [0, 0, 0, 1, 0, 1, 0]]

e_graph = [[0, 0, 1, 2, 0, 0, 0],
           [0, 0, 2, 0, 0, 3, 0],
           [1, 2, 0, 1, 3, 0, 0],
           [2, 0, 1, 0, 0, 0, 1],
           [0, 0, 3, 0, 0, 2, 0],
           [0, 3, 0, 0, 2, 0, 1],
           [0, 0, 0, 1, 0, 1, 0]]

# Finding the vertex that has to be visited next
def node_to_visit():
    global visited_and_distance
    v = -10
    for index in range(vertices):
        if visited_and_distance[index][0] == 0 \
                and (v < 0 or visited_and_distance[index][1] <=
                     visited_and_distance[v][1]):
            v = index
    return v

vertices = len(v_graph[0])

visited_and_distance = [[0, 0]]
for i in range(vertices - 1):
    visited_and_distance.append([0, sys.maxsize])

for vertex in range(vertices):
    # Finding next vertex to be visited
    to_visit = node_to_visit()
    for neighbor_index in range(vertices):
        # Updating new distances
        if v_graph[to_visit][neighbor_index] == 1 and \
                visited_and_distance[neighbor_index][0] == 0:
            new_distance = visited_and_distance[to_visit][1] \
                + e_graph[to_visit][neighbor_index]
            if visited_and_distance[neighbor_index][1] > new_distance:
                visited_and_distance[neighbor_index][1] = new_distance
    visited_and_distance[to_visit][0] = 1

i = 0
# Printing the distance
for distance in visited_and_distance:
    print("Computed Distance of ", chr(ord('a') + i),
          " from source vertex is: ", distance[1])
    i = i + 1

The output is as:

Bellman-Ford Algorithm

Like Dijkstra's algorithm, it is also a single-source shortest path algorithm. It computes the shortest distance from a single vertex to all other vertices in a weighted graph. Bellman-Ford's algorithm guarantees the correct answer even if the weighted graph has negatively weighted edges, whereas Dijkstra's algorithm cannot guarantee an accurate result in the case of negative edge weights.

Code Example

Below is the code example that computes the shortest distance from a single vertex to other vertices using the Bellman-Ford algorithm.

# Using Bellman-Ford Algorithm
class Graph:

    def __init__(self, vertices):
        self.V = vertices   # Vertices in the graph
        self.graph = []     # Array of edges

    # Adding edges
    def add_edge(self, s, d, w):
        self.graph.append([s, d, w])

    # Printing the solution
    def print_solution(self, dist):
        print("Vertex Distance from Source")
        for i in range(self.V):
            print("{0}\t\t{1}".format(i, dist[i]))

    def bellman_ford(self, src):
        # Filling the distance array
        dist = [float("Inf")] * self.V
        # Marking the source vertex
        dist[src] = 0

        # Relaxing edges |V| - 1 times
        for _ in range(self.V - 1):
            for s, d, w in self.graph:
                if dist[s] != float("Inf") and dist[s] + w < dist[d]:
                    dist[d] = dist[s] + w

        # Detecting a negative cycle in the graph
        for s, d, w in self.graph:
            if dist[s] != float("Inf") and dist[s] + w < dist[d]:
                print("Graph contains negative weight cycle")
                return

        # No negative weight cycle found!
        # Printing the distance array
        self.print_solution(dist)

g = Graph(5)
g.add_edge(0, 1, 5)
g.add_edge(0, 2, 4)
g.add_edge(1, 3, 3)
g.add_edge(2, 1, 6)
g.add_edge(3, 2, 2)
g.bellman_ford(0)

The output is as:

Floyd Warshall Algorithm

The Floyd Warshall algorithm finds the shortest distance between every pair of vertices in a given weighted graph and solves the All Pairs Shortest Path problem. You can use it for both directed and undirected weighted graphs. Weighted graphs are graphs in which edges have numerical values associated with them. Other names for the Floyd Warshall algorithm are the Roy-Warshall algorithm and the Roy-Floyd algorithm.

Code Example

Below is the code example that finds the shortest distance in a weighted graph using the Floyd Warshall algorithm.

# Total number of vertices
vertices = 4
INF = 999

# Implementing the Floyd-Warshall algorithm
def floyd_warshall(Graph):
    distance = list(map(lambda a: list(map(lambda b: b, a)), Graph))

    # Adding the vertices individually
    for k in range(vertices):
        for a in range(vertices):
            for b in range(vertices):
                distance[a][b] = min(distance[a][b],
                                     distance[a][k] + distance[k][b])
    solution(distance)

# Printing the desired solution
def solution(distance):
    for a in range(vertices):
        for b in range(vertices):
            if distance[a][b] == INF:
                print("INF", end=" ")
            else:
                print(distance[a][b], end=" ")
        print(" ")

Graph = [[0, 3, INF, 5],
         [2, 0, INF, 4],
         [INF, 1, 0, INF],
         [INF, INF, 2, 0]]

floyd_warshall(Graph)

The output is as:

Prim's Algorithm

It is a greedy algorithm that finds the minimum spanning tree for a weighted undirected graph. Let's look at the main terms associated with this algorithm.

- Spanning Tree - It is a subgraph of an undirected connected graph.
- Minimum Spanning Tree - It is the spanning tree with the minimum sum of the weights of the edges.

Prim's algorithm traverses the adjacent nodes with all connecting edges at every step.
It has many applications, such as:

- Making network cycles
- Laying down electrical wiring cables
- Network design

Code Example

Below is the code example that finds all edges with their respective weights using Prim's algorithm.

INF = 9999999
# graph's vertices
vertices = 5
# graph representation
graph = [[0, 9, 75, 0, 0],
         [9, 0, 95, 19, 42],
         [75, 95, 0, 51, 66],
         [0, 19, 51, 0, 31],
         [0, 42, 66, 31, 0]]

# creating an array for tracking the selected vertices
selected_vertex = [False, False, False, False, False]

# setting the number of edges to 0
number_of_edges = 0

# choosing the 0th vertex and setting it to True
selected_vertex[0] = True

# printing the edge and the corresponding weight
print("Edge : Weight\n")
while number_of_edges < vertices - 1:
    min = INF
    a = 0
    b = 0
    for i in range(vertices):
        if selected_vertex[i]:
            for j in range(vertices):
                if (not selected_vertex[j]) and graph[i][j]:
                    if min > graph[i][j]:
                        min = graph[i][j]
                        a = i
                        b = j
    print(str(a) + "-" + str(b) + ":" + str(graph[a][b]))
    selected_vertex[b] = True
    number_of_edges += 1

The output is as follows:

Kruskal's Algorithm

It finds the minimum spanning tree for a connected weighted graph. Its main objective is to find the subset of edges through which we can traverse every vertex of the graph. This algorithm uses the greedy approach, finding the optimum solution at every stage rather than focusing on a global optimum.

Kruskal's algorithm starts from the edges having the lowest weights and keeps adding edges until the goal is reached. Below are the steps to implement this algorithm.

- Sort all edges in ascending order of weight (low to high).
- Take the lowest-weight edge and add it to the spanning tree. Reject any edge that creates a cycle when added.
- Keep adding edges until all vertices are covered and a minimum spanning tree is formed.

Code Example

Below is the code example that finds a subset of edges and computes a minimum spanning tree using Kruskal's algorithm.

class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.graph = []

    def add_edge(self, u, v, w):
        self.graph.append([u, v, w])

    # Search function
    def find(self, parent, i):
        if parent[i] == i:
            return i
        return self.find(parent, parent[i])

    def apply_union(self, parent, rank, x, y):
        xroot = self.find(parent, x)
        yroot = self.find(parent, y)
        if rank[xroot] < rank[yroot]:
            parent[xroot] = yroot
        elif rank[xroot] > rank[yroot]:
            parent[yroot] = xroot
        else:
            parent[yroot] = xroot
            rank[xroot] += 1

    # Applying Kruskal's algorithm
    def kruskal_algo(self):
        result = []
        i, e = 0, 0
        self.graph = sorted(self.graph, key=lambda item: item[2])
        parent = []
        rank = []
        for node in range(self.V):
            parent.append(node)
            rank.append(0)
        while e < self.V - 1:
            u, v, w = self.graph[i]
            i = i + 1
            x = self.find(parent, u)
            y = self.find(parent, v)
            if x != y:
                e = e + 1
                result.append([u, v, w])
                self.apply_union(parent, rank, x, y)
        for u, v, weight in result:
            print("%d - %d: %d" % (u, v, weight))

g = Graph(6)
g.add_edge(0, 1, 4)
g.add_edge(0, 2, 4)
g.add_edge(1, 2, 2)
g.add_edge(1, 0, 4)
g.add_edge(2, 0, 4)
g.add_edge(2, 1, 2)
g.add_edge(2, 3, 3)
g.add_edge(2, 5, 2)
g.add_edge(2, 4, 4)
g.add_edge(3, 2, 3)
g.add_edge(3, 4, 3)
g.add_edge(4, 2, 4)
g.add_edge(4, 3, 3)
g.add_edge(5, 2, 2)
g.add_edge(5, 4, 3)
g.kruskal_algo()

The output is as follows:

Topological Sort Algorithm

It is a linear ordering of the vertices of a directed acyclic graph (DAG) in which vertex x occurs before vertex y for every directed edge xy from vertex x to vertex y. For instance, the graph's vertices can represent jobs to be completed, and the edges can depict the requirement that one task must be completed before another.

Code Example

Below is the code example that linearly orders the vertices using the topological sort algorithm.
from collections import defaultdict

# Class for representing a graph
class Graph:
    def __init__(self, vertices):
        # dictionary that contains the adjacency list
        self.graph = defaultdict(list)
        # number of vertices
        self.V = vertices

    # function for adding an edge to the graph
    def addEdge(self, u, v):
        self.graph[u].append(v)

    # A recursive function used by topologicalSort
    def topologicalSortUtil(self, v, visited, stack):
        # Marking the current node as visited
        visited[v] = True
        for i in self.graph[v]:
            if visited[i] == False:
                self.topologicalSortUtil(i, visited, stack)
        # Pushing the current vertex onto the stack that stores the result
        stack.append(v)

    # Topological sort function
    def topologicalSort(self):
        # Marking all the vertices as not visited
        visited = [False] * self.V
        stack = []
        # the sort starts from all vertices one by one
        for i in range(self.V):
            if visited[i] == False:
                self.topologicalSortUtil(i, visited, stack)
        # Printing the contents of the stack
        print(stack[::-1])

g = Graph(6)
g.addEdge(5, 2)
g.addEdge(5, 0)
g.addEdge(4, 0)
g.addEdge(4, 1)
g.addEdge(2, 3)
g.addEdge(3, 1)
print("The topological sort of the given graph is as")
g.topologicalSort()

The output is as follows:

Johnson's Algorithm

Johnson's algorithm finds the shortest path between every pair of vertices in a given weighted graph where weights can also be negative. It uses the technique of reweighting. If a given graph has non-negative edge weights, we find the shortest path between all pairs of vertices by running Dijkstra's algorithm. However, if a graph contains negatively weighted edges, we calculate a new set of non-negative edge weights and use the same method.

Code Example

Below is the code example that computes the shortest distance between every pair of vertices using Johnson's algorithm.
from collections import defaultdict

MAX_INT = float('Inf')

# returns the unvisited vertex with minimum distance
def min_distance(dist, visited):
    (min, min_vertex) = (MAX_INT, 0)
    for v in range(len(dist)):
        if min > dist[v] and visited[v] == False:
            (min, min_vertex) = (dist[v], v)
    return min_vertex

# Dijkstra's algorithm on the reweighted graph (no negative weights)
def Dijkstra_algo(graph, modified_graph, src):
    # vertices in the graph
    vertices = len(graph)
    # Dictionary for checking if a given vertex is
    # already in the shortest path tree
    sptSet = defaultdict(lambda: False)
    # distance of all vertices from the source
    distance = [MAX_INT] * vertices
    distance[src] = 0
    # computing the distance of all vertices from the source
    for count in range(vertices):
        # The current vertex that is not yet included in the
        # shortest path tree
        currentV = min_distance(distance, sptSet)
        sptSet[currentV] = True
        for v in range(vertices):
            if ((sptSet[v] == False) and
                    (distance[v] > (distance[currentV] +
                                    modified_graph[currentV][v])) and
                    (graph[currentV][v] != 0)):
                distance[v] = (distance[currentV] +
                               modified_graph[currentV][v])
    # Printing the shortest distance from the source
    for v in range(vertices):
        print('Vertex ' + str(v) + ': ' + str(distance[v]))

# computing the shortest distances (reweighting potentials)
def Bellman_algo(edges, graph, vertices):
    # Adding a source and calculating its minimum
    # distance from the other nodes
    distance = [MAX_INT] * (vertices + 1)
    distance[vertices] = 0
    for a in range(vertices):
        edges.append([vertices, a, 0])
    for a in range(vertices):
        for (src, des, weight) in edges:
            if ((distance[src] != MAX_INT) and
                    (distance[src] + weight < distance[des])):
                distance[des] = distance[src] + weight
    return distance[0:vertices]

# Function implementing Johnson's algorithm
def Johnson_algo(graph):
    edges = []
    # Creating a list of edges for the Bellman-Ford algorithm
    for a in range(len(graph)):
        for b in range(len(graph[a])):
            if graph[a][b] != 0:
                edges.append([a, b, graph[a][b]])

    # Weights used to modify the original weights
    modify_weights = Bellman_algo(edges, graph, len(graph))
    modified_graph = [[0 for x in range(len(graph))]
                      for y in range(len(graph))]

    # Modifying the weights to get rid of negative weights
    for a in range(len(graph)):
        for b in range(len(graph[a])):
            if graph[a][b] != 0:
                modified_graph[a][b] = (graph[a][b] +
                                        modify_weights[a] - modify_weights[b])

    print('The modified graph is as: ' + str(modified_graph))

    # Running Dijkstra for every vertex
    for src in range(len(graph)):
        print('\nThe shortest distance with vertex ' + str(src) +
              ' as the source is as:\n')
        Dijkstra_algo(graph, modified_graph, src)

graph = [[0, -5, 2, 3],
         [0, 0, 4, 0],
         [0, 0, 0, 1],
         [0, 0, 0, 0]]

Johnson_algo(graph)

The output is as follows:

Kosaraju's Algorithm

It is a depth-first-search-based algorithm that finds the strongly connected components of a graph. Kosaraju's algorithm is based on the idea that if one can reach a vertex y starting from vertex x, one should also be able to reach vertex x starting from vertex y. If that is the case, the vertices x and y are strongly connected.

Code Example

Below is the code example that determines whether a graph is strongly connected using Kosaraju's algorithm.

# Using Kosaraju's algorithm to check if a given directed graph is
# strongly connected or not
from collections import defaultdict

class Graph:
    def __init__(self, vertices):
        # vertices of the graph
        self.V = vertices
        # dictionary for storing the graph
        self.graph = defaultdict(list)

    # function for adding an edge to the graph
    def addEdge(self, u, v):
        self.graph[u].append(v)

    # A function performing DFS
    def DFSUtil(self, v, visited):
        # Marking the current node as visited
        visited[v] = True
        for i in self.graph[v]:
            if visited[i] == False:
                self.DFSUtil(i, visited)

    # Function returning the transpose of this graph
    def getTranspose(self):
        g = Graph(self.V)
        for i in self.graph:
            for j in self.graph[i]:
                g.addEdge(j, i)
        return g

    # main function; returns True if the graph is strongly connected
    def isSC(self):
        # Marking all the vertices as not visited for the 1st DFS
        visited = [False] * self.V
        # Performing DFS traversal starting from the first vertex
        self.DFSUtil(0, visited)
        # Return False if the DFS traversal doesn't visit all vertices
        if any(i == False for i in visited):
            return False
        # Creating a reversed graph
        gr = self.getTranspose()
        # Marking all the vertices as not visited for the 2nd DFS
        visited = [False] * self.V
        # Doing DFS on the reversed graph starting from the first vertex;
        # the starting vertex must be the same as for the first DFS
        gr.DFSUtil(0, visited)
        # returning False if not all vertices are visited in the second DFS
        if any(i == False for i in visited):
            return False
        return True

# Considering a random graph
g1 = Graph(5)
g1.addEdge(0, 1)
g1.addEdge(1, 2)
g1.addEdge(2, 3)
g1.addEdge(3, 0)
g1.addEdge(2, 4)
g1.addEdge(4, 2)
print("Yes the graph is strongly connected." if g1.isSC()
      else "Not strongly connected")

g2 = Graph(4)
g2.addEdge(0, 1)
g2.addEdge(1, 2)
g2.addEdge(2, 3)
print("Yes the graph is strongly connected." if g2.isSC()
      else "Not strongly connected")

The output is as follows:

Conclusion

So far, we discussed that graphs are non-linear data structures that consist of nodes and edges. We can use graphs to solve many real-life problems. For instance, they are used in social networking sites like Facebook, LinkedIn, etc. Each person on Facebook can be denoted by a vertex, and each node can contain information like name, gender, person ID, etc.

Further, we discussed some well-known graph algorithms:

- Breadth-First Search - Computes the shortest path in a given unweighted graph using a queue data structure.
- Depth-First Search - Uses a stack data structure to traverse a given graph.
- Dijkstra's Algorithm - Uses a minimum priority queue data structure to find the shortest path between the source node and the other nodes in a graph.
- Bellman-Ford Algorithm - A single-source shortest path algorithm like Dijkstra's, but tolerant of negative edge weights.
- Floyd-Warshall Algorithm - Solves the All Pairs Shortest Path problem.
- Prim's Algorithm - Finds the minimum spanning tree for an undirected weighted graph.
- Kruskal's Algorithm - Finds the minimum spanning tree for a connected weighted graph.
- Topological Sort Algorithm - Produces a linear ordering of the vertices of a directed acyclic graph.
- Johnson's Algorithm - Finds the shortest path between every pair of vertices, where weights can also be negative.
- Kosaraju's Algorithm - Finds the strongly connected components of a graph.

If you need more info or help with understanding a graph algorithm, join the Memgraph Discord server and ask away.
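The conclusion's first two bullets summarize BFS and DFS, whose code isn't shown above. As a rough illustration (not from the original article; the graph, function name, and adjacency-list layout are my own assumptions), a queue-driven BFS that returns one shortest path in an unweighted graph might look like this:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Return one shortest path from start to goal in an unweighted graph.

    graph is assumed to be a dict mapping a vertex to a list of neighbors.
    """
    visited = {start}
    queue = deque([[start]])      # FIFO queue of partial paths
    while queue:
        path = queue.popleft()    # take the oldest path first (level by level)
        vertex = path[-1]
        if vertex == goal:
            return path
        for neighbor in graph.get(vertex, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None                   # goal unreachable

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': ['e']}
print(bfs_shortest_path(graph, 'a', 'e'))  # ['a', 'b', 'd', 'e']
```

Swapping the deque's popleft for a pop from the right turns the FIFO queue into a stack and yields a depth-first traversal instead, matching the DFS bullet above (though DFS then no longer guarantees a shortest path).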
I am using an array and I tried this code:

#include <stdio.h>
#include <stdlib.h>

int main()
{
    char **q = (char*)malloc(1*sizeof(char*));
    q[0] = "So Many Books";
    q[1] = "So Many Books";
    q[2] = "So Many Books";
    q[3] = "So Many Books";
    q[4] = "So Many Books";
    printf("%s\n", q[0]);
    printf("%s\n", q[1]);
    printf("%s\n", q[2]);
    printf("%s\n", q[3]);
    printf("%s\n", q[4]);
    return 0;
}

Why is the compiler not giving me an error here?

Simply because the issue here is not related to any syntactical error; it's a logical error that is beyond the jurisdiction of a compiler error check. The problem is that, apart from index 0, any other index is an out-of-bounds access here, since the allocation only has room for one char pointer. There is nothing in the C standard to stop you from doing that, but accessing out-of-bounds memory invokes undefined behavior, hence the program output is also undefined. Anything could happen. Just don't do that. Remember, just because you can do something (write code that accesses out-of-bounds memory) does not mean you should be doing it. That said, please see this discussion on why not to cast the return value of malloc() and family in C.
So in Sim4Life I can create a line, and through the GUI it is possible to give this line a certain thickness (using the Modify tab and clicking Thicken Wire). However, I cannot seem to find this option in the Sim4Life API; does anyone know if this is possible? I'm using V3.4.1.2244. A snippet of the code I use is given below:

import s4l_v1.model as model
from s4l_v1.model import Vec3
import math

Wire1 = model.CreateArc(Vec3(0,0,0), 10, float(10)/180*math.pi, float(350)/180*math.pi)
Wire1.Name = 'MetalWire'
# Thicken Wire...

So I sort of found the solution...

import s4l_v1.model as model
from s4l_v1.model import Vec3
import math
import XCoreModeling

def get_spline_length(spline):
    wire = XCoreModeling.GetWires(spline)[0]
    return wire.GetLength()

def Thicken_Wire(spline, radius):
    wire = XCoreModeling.GetWires(spline)[0]
    target_curve = wire.GetGeometry(transform_to_model_space=True)
    # compute the length of the spline
    height = get_spline_length(spline)
    # get the Vec3 of the starting and end points
    startVec = target_curve.Eval(0.0)
    endVec = target_curve.Eval(height)
    # create a cylinder that will be bent to the spline; its height must be
    # the same as the length of the spline
    Wire1 = XCoreModeling.CreateSolidCylinder(startVec, Vec3(startVec[0]-height, startVec[1], startVec[2]), radius)
    # do the bending
    XCoreModeling.BendBodyToCurve(Wire1, startVec, Vec3(startVec[0]-height, startVec[1], startVec[2]), Vec3(0,0,1), target_curve)
    # remove the used spline
    spline.Delete()
    return Wire1

# example
radius = 10  # arc radius for the example
spline1 = model.CreateArc(Vec3(0,0,0), radius, float(10)/180*math.pi, float(350)/180*math.pi)
Wire1 = Thicken_Wire(spline1, 1)

As you noticed, the Thicken Wire tool is not available in the Python API at the moment (Sim4Life 4.0.1).
Thank you for sharing your clever solution!

Thanks. I was wondering if there is a way for me to flag this as 'solved'?

First, the post has to be "Asked as Question" (unfortunately, I haven't found a way to make that the default behavior). Only then can the post be marked as "Solved". These options are available in "Topic Tools".

Okay, good to know!
It's ok. Thanks anyway.

You mean? I just want to add/insert at the end of the recordset displayed on the JTable. When I insert new data, it automatically sorts the data displayed on the JTable. Why does the TableModel automatically sort the data in the database? I want to insert a row at the end of the resultset/table. How can I do it? Thanks!!!

I thought firing events from the table model could do it alone. I don't have any remove command in there; I just used direct deletion from the database. Is that wrong? Also, I didn't use vectors. Sorry if I'm confusing you.

Yeah. That's what I want. Just like Visual Basic 6's ADODC/DataGrid: immediate response to a change with ".refresh".

It is exactly what I chose to delete. The record from the database is what I always successfully delete, and the JTable is not even responding immediately to the change. It works, but it won't take effect until I delete it again or delete some other record. I don't use any method to delete a record from the JTable, just the database. Do you get me?

--- Update ---

Sir, I used rs.deleteRow() to remove a row. What do you mean? It is stated in the code above in "btnDelete". I'm sorry. I forgot.

model.connectDB();
if(e.getSource()==btnAdd)
{
    String name = JOptionPane.showInputDialog(null,"Enter here: ");
    sql = "Insert into friends...

The problem that I am now facing with that program is that when I add/delete, the refresh/update of the JTable comes late. I need to do it two or more times for the model to apply the changes. Seems like...

What can be the problem with my code here? I'm trying to refresh/update the table connected to the database.

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import...
Save 37% off Clojure: The Essential Reference. Just enter code fccborgatti into the promotional discount code box at checkout at manning.com.

The fold Function

function, since 1.5

Listing 1. Parallel processing, Reduce-Combine, Fork-Join

(fold
  ([reducef coll])
  ([combinef reducef coll])
  ([n combinef reducef coll]))

In its simplest form, fold takes a reducing function (a function supporting at least two arguments) and a collection. If the input collection type supports parallel folding (currently vectors, maps, and foldcat objects), it splits the input collection into chunks of roughly the same size and executes the reducing function on each partition in parallel (and on multiple CPU cores when possible). It then combines the results back into the final output:

(require '[clojure.core.reducers :as r])  ❶

(r/fold + (into [] (range 1000000)))      ❷
;; 499999500000

❶ Reducers are bundled with Clojure, but they need to be required before use.
❷ fold splits the one-million-element vector into chunks of roughly 512 elements each (the default). Chunks are then sent to the fork-join thread pool for parallel execution, where they are reduced by + and combined back again by +.

fold offers parallelism based on "divide and conquer": chunks of work are created and computation happens in parallel while, at the same time, finished tasks are combined back into the final result. The following diagram illustrates the journey of a collection going through a fold operation:

Figure 1. How the fork-join model maps to reduce-combine in parallel.

An important mechanism that fold implements (the diagram can't show this clearly without being confusing) is work-stealing. After fold sends a chunk to the Java fork-join framework, each worker can further split the work into smaller pieces, generating a mix of smaller and larger chunks. When free, a worker can "steal" work from another. The fork-join model for parallel computation is a complicated subject that can't be covered in full in this article.
If you want to know more, please read the paper on fork-join by Doug Lea, the author of fork-join in Java. Work-stealing improves over basic thread pooling, for example when a less predictable job keeps one thread unexpectedly busy.

Contract

INPUT

The contract differs based on the presence of the optional "combinef" and on whether the input collection is a map.

- "reducef" is a mandatory argument. It must be a function supporting at least two arguments (and a zero-argument call when "combinef" isn't provided). The two-argument call implements the canonical reduce contract, receiving an accumulator and the current element. The zero-argument call is used to establish the seed for the result, similarly to the "init" argument in reduce. When no "combinef" is provided, the 0-arity is invoked once for each chunk to establish the seed for the reduction. "reducef" is also used in place of "combinef" when the combination function isn't provided. In this case "reducef" must be associative, as the chunks can be re-combined in any order.
- "combinef" is optional, and when present it must allow a zero-argument and a two-argument call. "combinef" needs to be associative to allow chunks to be combined in any order. The two-argument call is used to concatenate chunks back into the final result. When "combinef" is present, "reducef"'s zero arity is never called and "combinef" is called instead.
- "n" is the approximate size of the chunks the input collection "coll" is split into. The default is 512.
- "coll" can be of any sequential type, empty or nil. If "coll" isn't a vector, hash-map or clojure.core.reducers.Cat object (see r/foldcat to know more), fold falls back on sequential reduce instead of going parallel. When "coll" is a hash-map, both "reducef" and "combinef" are invoked with three arguments instead of two, as per the reduce-kv contract.

NOTABLE EXCEPTIONS

IllegalArgumentException is raised for the few unsupported collection types.
This could happen, for example, when "coll" is a transient or a popular Java collection like java.util.HashMap. There are good reasons to exclude thread-unsafe mutable collections that would otherwise be subject to concurrent access. Other thread-safe Java collections (like java.util.concurrent.ConcurrentHashMap) could be made "foldable", as we are going to explore in the extended example.

OUTPUT

- returns the result of invoking (reducef) or (combinef) with no arguments when "coll" is nil or contains one element.
- returns the result of applying "reducef" to the next item in the collection, then "reducef" applied to the previous result and the next item, up to the last item in the collection. If "combinef" is present, the partial accumulations are merged back using "combinef". The last result of applying "reducef" (or "combinef") is returned.

Examples

fold enables parallelism on top of the reduce-combine model. Many types of computations benefit from (or can be adapted to) fold-like operations, and reduce-based data pipelines are a good candidate. In that example, we used a sequential count-occurrences function to count the frequency of words in a large text. We could rewrite the example to use fold like this:

(require '[clojure.core.reducers :as r])

(defn count-occurrences [coll]
  (r/fold
    (r/monoid #(merge-with + %1 %2) (constantly {}))  ❶
    (fn [m [k cnt]]                                   ❷
      (assoc m k (+ cnt (get m k 0))))
    (r/map #(vector % 1) (into [] coll))))            ❸

(defn word-count [s]
  (count-occurrences (.split #"\s+" s)))

(def war-and-peace "")
(def book (slurp war-and-peace))
(def freqs (word-count book))

(freqs "Andrew")
;; 700

❶ r/monoid is a helper function to create a function suitable for "combinef". The first argument for r/monoid is the merge function to use when two pieces are combined together. We want to sum the counts for the same word, something we can do with merge-with.
❷ "reducef" needs to assoc every word into the results map "m".
Two cases are possible: the word already exists and the count gets incremented, or the word doesn't exist and zero is used as the initial count.
❸ "coll" needs to be a vector, which into ensures. The transformation of each line includes the creation of a tuple (a vector of two items) with the word and the number one. We use r/map from the reducers library for this, so the transformation is deferred to parallel execution.

fold also works natively on maps. We could use the freqs produced before as a new input for another fold operation. We could, for example, look at the relationship between the first letter of a word and its frequency in the book. The following example groups words by their initial letter and then calculates their average frequency. This operation is a good candidate for parallel fold, because the input contains thousands of keys (one for each word found in the input text):

(defn group-by-initial [freqs]                           ❶
  (r/fold
    (r/monoid #(merge-with into %1 %2) (constantly {}))  ❷
    (fn [m k v]                                          ❸
      (let [c (Character/toLowerCase (first k))]
        (assoc m c (conj (get m c []) v))))
    freqs))

(defn update-vals [m f]                                  ❹
  (reduce-kv (fn [m k v] (assoc m k (f v))) {} m))

(defn avg-by-initial [by-initial]                        ❺
  (update-vals by-initial #(/ (reduce + 0. %) (count %))))

(defn most-frequent-by-initial [freqs]                   ❻
  (->> freqs
       group-by-initial
       avg-by-initial
       (sort-by second >)
       (take 5)))

(most-frequent-by-initial freqs)                         ❼
;; ([\t 41.06891634980989]
;;  [\o 33.68537074148296]
;;  [\h 28.92705882352941]
;;  [\w 26.61111111111111]
;;  [\a 26.54355400696864])

❶ group-by-initial uses fold, expecting a hash-map from strings to numbers. The output is a much smaller map from letters to vectors. The number of keys in this map is equal to the number of letters in the alphabet (assuming the text is large enough and we filtered out numbers and symbols).
The letter "a" in this map contains something like [700, 389, 23, 33, 44], which are the occurrences of each word in the book starting with the letter "a".
❷ The combining function is assembled using r/monoid. The initial value for each reducing operation is the empty map {}. Partial results are combined together by key, merging their vector values into a single vector.
❸ The reducing function takes three parameters: a map of partial results "m", the current key "k" and the current value "v". Similarly to counting word frequencies, we fetch a potentially existing key (using an empty vector as the default value) and conj "v" onto the vector of values. The key is the initial letter of each word found in the input map.
❹ update-vals takes a map and a function "f" of one parameter. It then applies "f" to every value in the map using reduce-kv.
❺ avg-by-initial replaces each vector value in a map with the average of the numbers found in it.
❻ most-frequent-by-initial orchestrates the functions seen this far to extract the most frequent words by initial.
❼ freqs is the result of the word count from earlier in the example.

After running most-frequent-by-initial we can see that the letter "t" is on average the most used at the beginning of a word, closely followed by "o", "h", "w" and "a". This indicates that words starting with the letter "t" are on average the most repeated throughout the book (although some other word not starting with "t" might be, in absolute terms, the most frequent).

Creating your own fold

fold is a protocol-based extensible mechanism. Most of the Clojure collections provide a basic sequential folding mechanism based on reduce, with the exception of vectors, maps, and foldcat objects, which are equipped with a parallel reduce-and-combine algorithm. Classes like java.util.HashMap don't have a proper fold, and there are good reasons for that, connected to the danger of exposing a mutable data structure to potentially parallel threads.
Other thread-safe classes like java.util.concurrent.ConcurrentHashMap could be extended to be foldable, which is the subject of the following section. What we're going to do for java.util.concurrent.ConcurrentHashMap could easily be extended to completely custom collections (provided they're equipped for concurrent access). To drive our example, let's use a large ConcurrentHashMap of integers (keys) into integers (values) and some expensive function to apply to all the keys. A trivial transformation on values like inc or str is probably overkill for fold parallelism, so we could use the Leibniz formula to approximate "Pi". We'd like to execute the transformation on each key in parallel.

The design of the parallel execution is as follows: we don't split the map, we split the keys into chunks. Values corresponding to each key partition are transformed in parallel by separate threads. No clashing normally happens, but fork-join is a work-stealing algorithm, and a partition can be routed to a thread where another partition was assigned, generating an overlap. This is the reason why we need java.util.concurrent.ConcurrentHashMap instead of a plain java.util.HashMap.

(import 'java.util.concurrent.ConcurrentHashMap)
(require '[clojure.core.reducers :as r])

(defn pi [n]                        ❶
  "Pi Leibniz formula approx."
  (->> (range)
       (filter odd?)
       (take n)
       (map / (cycle [1 -1]))
       (reduce +)
       (* 4.0)))

(defn large-map [i j]               ❷
  (into {} (map vector (range i) (repeat j))))

(defn combinef [init]               ❸
  (fn ([] init)
      ([m _] m)))

(defn reducef [^java.util.Map m k]  ❹
  (doto m (.put k (pi (.get m k)))))

(def a-large-map (ConcurrentHashMap. (large-map 100000 100)))

(dorun                              ❺
  (r/fold (combinef a-large-map) reducef a-large-map))
;; IllegalArgumentException No implementation of method: :kv-reduce

❶ pi calculates an approximation of the π value. The greater the number "n", the better the approximation. Relatively small numbers in the order of the hundreds generate an expensive computation.
❷ large-map serves the purpose of creating a large ConcurrentHashMap to be used in our example. The map keys are increasing integers, while the values are always the same.
❸ combinef with no arguments returns the base map, the one all threads should update concurrently. Concatenation isn't needed, as the updates happen on the same mutable ConcurrentHashMap instance. combinef with two arguments returns one of the two (they're the same object). combinef could be effectively replaced by (constantly m).
❹ reducef replaces an existing key with the calculated "pi". Note the use of doto, which allows Java operations like .put (which would otherwise return nil) to return the map.
❺ fold is unsuccessful, as it searches for a suitable implementation of reduce-kv which isn't found.

We're facing the first problem: fold fails because two polymorphic dispatches are missing. fold doesn't have a specific parallel version for java.util.concurrent.ConcurrentHashMap, so it routes the call to reduce-kv. reduce-kv also fails, because there's an implementation for the Clojure hash-map but not for the Java ConcurrentHashMap. As a first step, we could provide a reduce-kv version which removes the error, but this solution isn't enough to run the transformations in parallel:

(extend-protocol                                ❶
  clojure.core.protocols/IKVReduce
  java.util.concurrent.ConcurrentHashMap
  (kv-reduce [m f _]
    (reduce (fn [amap [k v]] (f amap k)) m m)))

(time                                           ❷
  (dorun (r/fold (combinef a-large-map) reducef a-large-map)))
;; "Elapsed time: 41113.49182 msecs"

(.get a-large-map 8190)                         ❸
;; 3.131592903558553

❶ We can add a type to a protocol by using extend-protocol. Our reduce-kv doesn't need the value, because we're mutating the Java ConcurrentHashMap in place.
❷ fold now runs correctly. We need dorun to prevent the map from being printed on screen. We also printed a reasonably good estimate of the time elapsed for the operation to finish, which is above forty seconds.
❸ To be sure that a-large-map has effectively been updated, we check the random key "8190". It contains an approximation of "pi", as expected.

Although we provided a suitable reduce-kv implementation, java.util.concurrent.ConcurrentHashMap doesn't have a proper parallel fold yet. Similarly to reduce-kv, we need to provide a fold implementation by extending the correct protocol. The idea is to split the key set instead of the map, and have each thread operate in parallel to process its given subset:

(defn foldmap [m n combinef reducef]           ❶
  (#'r/foldvec (into [] (keys m)) n combinef reducef))

(extend-protocol r/CollFold                    ❷
  java.util.concurrent.ConcurrentHashMap
  (coll-fold [m n combinef reducef]
    (foldmap m n combinef reducef)))

(def a-large-map (ConcurrentHashMap. (large-map 100000 100)))

(time                                          ❸
  (dorun (into {} (r/fold (combinef a-large-map) reducef a-large-map))))
;; "Elapsed time: 430.96208 msecs"

❶ foldmap implements the parallel strategy for java.util.concurrent.ConcurrentHashMap. It delegates to foldvec in the reducers namespace with the keys coming from the map, effectively reusing vector parallelism.
❷ We instruct the CollFold protocol to use foldmap when fold is presented with a java.util.concurrent.ConcurrentHashMap instance.
❸ After recreating the large map (remember how it's mutated after each execution) we try fold again, resulting in the expected performance boost (from over forty seconds for the sequential case down to 430 milliseconds). We also take care of transforming the ConcurrentHashMap returned by fold back into a persistent data structure for later use.

After extending the CollFold protocol from the clojure.core.reducers namespace, we can see that fold effectively runs the update of the map in parallel, cutting the execution time considerably.
As a comparison, this is the same operation performed on a persistent hash-map, which is parallel-enabled by default:

(def a-large-map (large-map 100000 100))

(time
  (dorun (r/fold (r/monoid merge (constantly {}))
                 (fn [m k v] (assoc m k (pi v)))
                 a-large-map)))
;; "Elapsed time: 17977.183154 msecs" ❶

❶ We can see that although the Clojure hash-map is parallel-enabled, the fact that it's a persistent data structure works against fast concurrent updates. This isn't a weakness in Clojure's data structures, as they're designed with a completely different goal in mind.

See Also

pmap offers another route to concurrency. fold, on the other hand, allows a free worker to help a busy one deal with a longer-than-expected request. As a rule of thumb, prefer pmap to enable lazy processing on predictable tasks, but use fold in less predictable scenarios where laziness is less important.

Performance Considerations and Implementation Details

=> O(n) linear

fold is implemented to recursively split a collection into chunks and send them to the fork-join framework, effectively building a tree in O(log n) passes. However, each chunk is subject to a linear reduce that dominates the logarithmic traversal: the bigger the initial collection, the more calls to the reducing function, making the behavior linear overall. The linearity of fold is unlikely to matter in performance analysis, as other factors, like the parallel execution of computationally intensive tasks, come into play. The orchestration of parallel threads has a cost that should be taken into account when executing operations in parallel: like pmap, fold performs optimally for non-trivial transformations on potentially large datasets.
The following simple operation, for example, results in a performance degradation when executed in parallel:

(require '[criterium.core :refer [quick-bench]])
(require '[clojure.core.reducers :as r])

(let [not-so-big-data (into [] (range 1000))]
  (quick-bench (reduce + not-so-big-data)))
;; Execution time mean : 11.481952 µs

(let [not-so-big-data (into [] (range 1000))]
  (quick-bench (r/fold + not-so-big-data)))
;; Execution time mean : 32.683242 µs

As the collection gets bigger, the computation more complex and the available cores more numerous, fold starts to outperform the equivalent sequential operation. But the potential performance boost is still not by itself enough to justify a fold, because other variables come into play, such as memory requirements. fold is designed to be an eager operation, as the chunks of input are further segmented by each worker to allow an effective work-stealing algorithm. fold operations like the examples in this article need to load the entire dataset in memory before starting execution (or as part of the execution). When fold produces results which are substantially smaller than the input, there are ways to prevent the entire dataset from being loaded into memory, for example by indexing it on disk (or in a database) and including in the reducing function the necessary IO to load the data. This approach is used, for example, in the Iota library[1].

Now you have a good grasp of how the fold function works! If you're interested in learning more about the book, check it out on liveBook here and see this slide deck.

[1] The Iota library, which scans large files to index their rows and uses that as the input collection for fold. The library's README explains how to use it.
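The chunk-reduce-combine strategy that fold uses can be sketched outside Clojure as well. Below is a rough Python analogue (the `parallel_fold` and `plus` names are my own inventions for illustration, not part of any library): `combinef()` with no arguments supplies the identity element, `combinef(a, b)` merges two partial results, and `reducef` folds one chunk, mirroring the roles the article describes.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def parallel_fold(combinef, reducef, coll, chunk=512):
    """fold-style sketch: split coll into chunks, reduce each chunk
    on its own worker, then merge the partial results pairwise."""
    chunks = [coll[i:i + chunk] for i in range(0, len(coll), chunk)]
    if not chunks:
        return combinef()          # identity for an empty collection
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(
            lambda c: reduce(reducef, c, combinef()), chunks))
    result = parts[0]
    for part in parts[1:]:
        result = combinef(result, part)
    return result

def plus(*args):
    # Like Clojure's +: plus() is 0, plus(a, b) is a + b.
    return sum(args)

total = parallel_fold(plus, lambda acc, x: acc + x, list(range(10000)))
print(total)  # 49995000
```

As with the Clojure version, the merge step only pays off when the per-chunk work is non-trivial; for cheap reducing functions the thread orchestration dominates.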
https://freecontent.manning.com/exploring-the-fold-function/
{- arch-tag: Generic Dict-Like Object Support
Copyright (C) 2005 John Goerzen
-}

{- |
   Module     : Database.AnyDBM
   Copyright  : Copyright (C) 2005 John Goerzen
   License    : GNU LGPL, version 2.1 or above

   Maintainer : John Goerzen <jgoerzen@complete.org>
   Stability  : provisional
   Portability: portable

Written by John Goerzen, jgoerzen\@complete.org

This module provides a generic infrastructure for supporting storage
of hash-like items with String -> String mappings.  It can be used
for in-memory or on-disk items.
-}
module Database.AnyDBM (-- * The AnyDBM class
                        AnyDBM(..),
                        -- * AnyDBM utilities
                        mapA, strFromA, strToA
                       )
where
import Prelude hiding (lookup)
import System.IO
import Data.HashTable
import Control.Exception
import Data.List.Utils(strFromAL, strToAL)

{- | The main class for items implementing this interface.

People implementing this class should provide methods for:

* 'closeA' (unless you have no persistent storage)

* 'flushA' (unless you have no persistent storage)

* 'insertA'

* 'deleteA'

* 'lookupA'

* either 'toListA' or 'keysA'
-}
class AnyDBM a where
    {- | Close the object, flushing any un-saved data to disk.
       The default implementation calls 'flushA'. -}
    closeA :: a -> IO ()

    {- | Flush the object, saving any un-saved data to disk but
       not closing it.  Called automatically by 'closeA'. -}
    flushA :: a -> IO ()

    {- | Insert the given data into the map.  Existing data with
       the same key will be overwritten. -}
    insertA :: a                -- ^ AnyDBM object
            -> String           -- ^ Key
            -> String           -- ^ Value
            -> IO ()

    {- | Delete the data referenced by the given key.  It is not
       an error if the key does not exist. -}
    deleteA :: a -> String -> IO ()

    {- | True if the given key is present. -}
    hasKeyA :: a -> String -> IO Bool

    {- | Find the data referenced by the given key. -}
    lookupA :: a -> String -> IO (Maybe String)

    {- | Look up the data and raise an exception if the key does not
       exist.  The exception raised is PatternMatchFail, and the
       string accompanying it is the key that was looked up. -}
    forceLookupA :: a -> String -> IO String

    {- | Call 'insertA' on each pair in the given association list,
       adding them to the map.
    -}
    insertListA :: a -> [(String, String)] -> IO ()

    {- | Return a representation of the content of the map as a list. -}
    toListA :: a -> IO [(String, String)]

    {- | Returns a list of keys in the 'AnyDBM' object. -}
    keysA :: a -> IO [String]

    {- | Returns a list of values in the 'AnyDBM' object. -}
    valuesA :: a -> IO [String]

    valuesA h = do l <- toListA h
                   return $ map snd l

    keysA h = do l <- toListA h
                 return $ map fst l

    toListA h =
        let conv k = do v <- forceLookupA h k
                        return (k, v)
            in do k <- keysA h
                  mapM conv k

    forceLookupA h key =
        do x <- lookupA h key
           case x of
             Just y -> return y
             Nothing -> throwIO $ PatternMatchFail key

    insertListA h [] = return ()
    insertListA h ((key, val):xs) = do insertA h key val
                                       insertListA h xs

    hasKeyA h k = do l <- lookupA h k
                     case l of
                       Nothing -> return False
                       Just _ -> return True

    closeA h = flushA h

    flushA h = return ()

{- | Similar to MapM, but for 'AnyDBM' objects. -}
mapA :: AnyDBM a => a -> ((String, String) -> IO b) -> IO [b]
mapA h func = do l <- toListA h
                 mapM func l

{- | Similar to 'Data.List.Utils.strToAL' -- load a string representation
into the AnyDBM.  You must supply an existing AnyDBM object; the items
loaded from the string will be added to it. -}
strToA :: AnyDBM a => a -> String -> IO ()
strToA h s = insertListA h (strToAL s)

{- | Similar to 'Data.List.Utils.strFromAL' -- get a string representation
of the entire AnyDBM. -}
strFromA :: AnyDBM a => a -> IO String
strFromA h = do l <- toListA h
                return (strFromAL l)

instance AnyDBM (HashTable String String) where
    insertA h k v = do delete h k
                       insert h k v
    deleteA = delete
    lookupA = lookup
    toListA = toList
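The design above, a class where a small core (insert, delete, lookup, to-list) is supplied by each backend and everything else is derived as a default method, translates directly to other languages. Here is a rough Python sketch of the same pattern; the class names `AnyDBM` and `MemDBM` and all method names are my own renderings for illustration, not an existing library API:

```python
class AnyDBM:
    """String -> String map interface.  Subclasses provide insert,
    delete, lookup and to_list; everything else is derived, mirroring
    the default methods of the Haskell AnyDBM class."""
    def insert(self, key, value): raise NotImplementedError
    def delete(self, key): raise NotImplementedError
    def lookup(self, key): raise NotImplementedError
    def to_list(self): raise NotImplementedError

    # Derived operations (the Haskell "default method" equivalents):
    def has_key(self, key):
        return self.lookup(key) is not None

    def force_lookup(self, key):
        value = self.lookup(key)
        if value is None:
            raise KeyError(key)   # stands in for PatternMatchFail
        return value

    def insert_list(self, pairs):
        for key, value in pairs:
            self.insert(key, value)

    def keys(self):
        return [k for k, _ in self.to_list()]

    def values(self):
        return [v for _, v in self.to_list()]

class MemDBM(AnyDBM):
    """In-memory backend, like the HashTable instance above."""
    def __init__(self):
        self._data = {}
    def insert(self, key, value):
        self._data[key] = value
    def delete(self, key):
        self._data.pop(key, None)   # deleting a missing key is not an error
    def lookup(self, key):
        return self._data.get(key)
    def to_list(self):
        return list(self._data.items())

db = MemDBM()
db.insert_list([("a", "1"), ("b", "2")])
```

A disk-backed subclass would only need to re-implement the same four primitives (Python's own `dbm` module plays a similar role to the on-disk AnyDBM instances).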
http://hackage.haskell.org/package/anydbm-1.0.5/docs/src/Database-AnyDBM.html
Using the url tag and the reverse() function, you can avoid hard-coding URLs in template HTML files and views, so that even if a URL path changes, it has no effect on template and view source code. Incidentally, if you want to get the currently accessed URL in a template, it's more convenient to use request.path or request.get_full_path(). Of course, to use the request object in templates, you should include 'django.core.context_processors.request' in the TEMPLATE_CONTEXT_PROCESSORS settings item.

In the beginning, when developing applications with Django, URL addresses were hard-coded in urls.py, views.py and template HTML files. This raises the problem that if you change the URL path of a page in urls.py, then everywhere that page's URL path is used (views.py and template HTML files) needs to be changed. In a big project, that is a lot of work.

1. How To Use Django url Tag In Template Html Files.

Django itself provides a method to avoid the above issue: the template url tag (the corresponding url() pattern function lives in the django.conf.urls module). With the url tag, no matter how a URL path changes in the url patterns (defined in urls.py), the address source code in template HTML files does not need to change.

1.1 Not Use url Tag In Template Html File.

For example, when the url tag is not used in a template HTML file, you can define the url pattern for the home page URL like below.

urlpatterns = patterns('',
    (r'^home$', 'get_home_index'),
)

Below is the HTML content in template files.

<a href="/home">Home Page</a>

Generally, every page on the website should have a link to the home page, so there are many home page links (/home). But one day you may want to change the home page link to something else, such as /index, in the urls.py file.
urlpatterns = patterns('',
    (r'^index$', 'get_home_index'),
)

You will find it is tedious: you need to change almost every page's <a href="/home">Home Page</a> to <a href="/index">Home Page</a>.

1.2 Use url Tag In Template Html File.

With the url tag, things are a lot different. If the url tag is used in templates, you should add a name parameter to the url in the url patterns definition file urls.py like below.

urlpatterns = patterns('',
    url(r'^home$', 'get_home_index', name="home_index"),
)

You should also change the HTML source code in the template HTML file like below.

<a href="{%url 'app_name:home_index'%}">Home Page</a>

app_name is the name of the app where the url resides, and home_index is the value of the name parameter in the url pattern definition url(r'^home$', 'get_home_index', name="home_index"). In this way, no matter how you modify the address paths in urlpatterns, the generated template url links change with them automatically, while the template url tag source code does not need to change, which saves a lot of time. Please note that the name parameter's value must be unique globally, because a web page's URL should be unique within one website.

1.3 How To Include Parameters In Url Tag.

A url pattern's address path can contain parameters, such as below.

(r'^(?P<year>\d{4})/(?P<month>\d{1,2})/$', 'get_news_list'),

There are two parameters, so the final page URL looks like /2019/03/. If you want to use the url tag in template HTML files, you should make the following changes.

- Add a name to this url pattern.

url(r'^(?P<year>\d{4})/(?P<month>\d{1,2})/$', 'get_news_list', name="news_archive"),

- Add the parameter values after the url tag in the HTML template file.

<a href="{% url 'app_name:news_archive' 2019 03 %}">2019/03</a>

- You can also specify parameter values with parameter names like below.

<a href="{%url 'app_name:news_archive' year=2019 month=03%}">2019/03</a>

- Do not forget to add the above two parameters to the view function news_list in the views.py file.
The parameters are separated by commas.

def news_list(request, year, month):
    print 'year:', year
    print 'month:', month
    ......

2. How To Use reverse Function In Views.

Using the url tag in templates is very simple, but what about using urls in views? In the past, when the reverse function was not used, HttpResponseRedirect("/home") was used to point to an address. But when urlpatterns change an address path, the argument value of every such HttpResponseRedirect call in the views has to change accordingly. With the django.urls.reverse function, you can create the HttpResponseRedirect object like this: HttpResponseRedirect(reverse("home_index")). The benefit of this is the same as using the url tag in template HTML files.

2.1 How To Include Parameters In reverse Function.

To generate a url with parameters using the reverse function, you can do it in the below two ways.

- Pass parameter names and values in a dictionary object; the dictionary object's keys are the parameter names, and its values are the parameter values.

from django.urls import reverse
......
reverse("app_name:news_archive", kwargs={"year":2019, "month":03})

- Pass parameter values in a list; the list item order is the same as the url pattern's parameter order.

from django.urls import reverse
......
reverse("app_name:news_archive", args=[2019,03])
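Conceptually, reverse() is nothing more than a lookup from a pattern name to a path template, plus parameter substitution. The following Django-free toy sketch illustrates the idea; the URL_PATTERNS dict and this reverse function are hypothetical stand-ins, not Django's actual implementation (which works from the compiled urlpatterns):

```python
import re

# Hypothetical registry: url-pattern name -> path template.
URL_PATTERNS = {
    "home_index": "/home",
    "news_archive": "/{year}/{month}/",
}

def reverse(name, kwargs=None, args=None):
    """Build a URL from a named pattern, mimicking django.urls.reverse."""
    template = URL_PATTERNS[name]
    if kwargs:
        return template.format(**kwargs)
    if args:
        # Fill the placeholders positionally, in pattern order.
        fields = re.findall(r"{(\w+)}", template)
        return template.format(**dict(zip(fields, args)))
    return template

print(reverse("news_archive", kwargs={"year": 2019, "month": "03"}))
# -> /2019/03/
```

Because callers only ever refer to the name, renaming the path in the registry (say, "/home" to "/index") changes every generated link at once, which is exactly the decoupling the article is after.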
https://www.code-learner.com/django-url-tag-and-reverse-function-example/
I've recently finished the User accessible login reports project. After the initial roll-out to users I had a few reports of people getting server errors when certain sets of data were viewed. This website is written in Python and uses the Django framework. During the template processing stage we were getting error messages like the following:

DjangoUnicodeDecodeError: 'utf8' codec can't decode byte 0xe0 in position 30: invalid continuation byte.

It appears that not all data coming from the whois service is encoded in the same way (see RFC 3912 for a discussion of the issue). In this case it was using a latin1 encoding, but whois is quite an old service which has no support for declaring the content encoding used, so we can never know what we are going to have to handle in advance. A bit of searching around revealed the chardet module, which can be used to automatically detect the encoding used in a string. So, I just added the following code and the problem was solved.

import chardet

enc = chardet.detect(val)['encoding']
if enc != 'utf-8':
    val = val.decode(enc)
val = val.encode('ascii', 'replace')

The final result is that I am guaranteed to have the string from whois as an ascii string, with any unsupported characters replaced by a question mark (?). It's not a perfect representation, but it is web safe and is good enough for my needs.
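As an aside, when chardet isn't available, a common stdlib-only fallback is to try utf-8 first and fall back to latin-1, which accepts every possible byte sequence, before normalizing to ASCII. This Python 3 sketch (the function name is my own) follows that assumption rather than the blog's chardet approach:

```python
def to_web_safe_ascii(raw: bytes) -> str:
    """Decode bytes of unknown encoding, then force ASCII,
    replacing anything unrepresentable with '?'."""
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        # latin-1 maps every byte value to a character, so this decode
        # cannot fail; it may mis-render non-latin1 input, but never crashes.
        text = raw.decode("latin-1")
    return text.encode("ascii", "replace").decode("ascii")

print(to_web_safe_ascii(b"caf\xe9"))  # latin-1 'é' -> 'caf?'
```

The trade-off versus chardet is accuracy: the fallback guesses only between two encodings, but it has no dependency and is fully deterministic.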
http://blog.inf.ed.ac.uk/squinney/tag/python/
Please Help! I have spent two days trying to solve this problem and I'm sure it's going to be something simple in the end, but right now I need to get it done, so any help would be appreciated.

I have created a page like the recipe example in your tutorial, but mine contains a dropdown list and a gallery. The dropdown list is now working and displaying the list from one dataset (members), but the gallery will not display anything at all. I have filtered the second dataset (images) with the field from the main dataset which populates the dropdown list. I have even used reference fields to connect the two databases and I have synced the databases. Neither in preview nor published modes will the gallery display anything. Any ideas?

Hi, can you please share your site?

Sorry, but it's not live yet. This is the code I've been trying, based on tutorials I have followed:

$w.onReady(function () {
});

export function memberDropdown_change(event, $w) {
    $w("#memberimagesdataset").setFilter(wixData.filter()
        .eq("#memberName", $w("#memberDropdown").value));
    $w("#repeater1").show();
}

then I tried this:

import wixData from 'wix-data';

$w.onReady(function () {
    $w('#memberimagesdataset').onReady(function () {
        searchMembers();
    });
});

export function memberDropdown_change() {
    searchMembers();
}

function searchMembers() {
    wixData.query('MemberImages').eq('memberName', $w("#memberDropdown").value)
        .find()
        .then(res => {
            $w('#repeater1').data = res.items;
            $w('#repeater1').show
        });
}

Hope you can help. Thanks.

In your first solution, it looks like there's a typo: the equality condition should use "memberName" instead of "#memberName". If that doesn't work, we need the full context to understand better. You can post your editor URL even if your site is not live yet; only Wix Support can open it.
I wonder if you could help me understand what I'm doing please. This is a test page with a dropdown list populated from the Members db. In preview the dropdown list appears to work but when I try to place the selected item into a variable and view it all I get is Array[0]. So... I'm thinking that this is the first step to correct - what should I do? This is the database: Thanks for your help. Remove the $w wrapping selectedMem and newMem. $w is used to select elements, you don't need it here.
https://www.wix.com/corvid/forum/community-discussion/displaying-images-in-a-gallery-based-on-user-selection-from-a-dropdown-list
Problem: Write a program that shuffles a list. Do so without using more than a constant amount of extra space, and in time linear in the size of the list.

Solution: (in Python)

import random
random.seed()

def shuffle(myList):
    n = len(myList)
    for i in xrange(0, n):
        j = random.randint(i, n-1)  # randint is inclusive
        myList[i], myList[j] = myList[j], myList[i]

Discussion: Using a computer to shuffle a deck of cards is nontrivial at first glance for the following reasons. First, computers are precise. A computer can't "haphazardly" spread the cards on a table and mix them around for a while. Neither can it use a "riffle" technique until it's satisfied the cards are random enough. These are human techniques whose inherent sloppy flaws are caused by our limited-precision dexterity. Another difficulty is that we have to give a mathematical guarantee that the resulting distribution of shuffles is uniform. It certainly isn't when humans shuffle cards, so this adds a new level of difficulty. We note that there are many gambling companies whose integrity is based on the validity of their shuffling algorithms (and hence, the fairness of their games). But despite the simplicity of our algorithm above, there are still some companies who got it wrong. So we need to take a close look at the right way to solve this problem.

Before we get there, we note that this problem generalizes to constructing random permutations. While it's easier to understand a problem based on a deck of cards, generating random permutations is really the useful thing we're getting out of this. This page gives an example of how not to shuffle cards. We will derive the correct way. If we have a list of n elements, and a good shuffling algorithm, then each element has a uniform probability of 1/n of ending up in the first position in the list.
Once we've chosen such an element, we can recursively operate on the remaining elements, randomly choosing which element goes in the second spot, where each has a chance of 1/(n-1) to get there. Note that this means in the first stage we pick a random integer uniformly between 1 and n, and in the second stage we pick an integer between 2 and n. Inductively, if we have already processed the first i cards, then we need to pick a random integer uniformly between i+1 and n to decide which of the remaining cards goes in the (i+1)-th spot. Note that subtracting 1 from all of these randomly chosen numbers gives us the right indices. We make one further note: the order of the unprocessed cards is totally irrelevant. That means that if, say, during the first stage we want the 5th element to go in the first spot, we can simply swap the fifth and first elements in the list. Since we're picking uniformly distributed numbers, we still have an equal chance to pick any one of the remaining cards in later stages. And of course, sometimes we will be swapping an element with itself, which is the same as not swapping at all. Taking all of this into consideration, we have the following pseudocode:

on input list L:
    for i in range(0, n-1) inclusive:
        j = random(i, n-1) inclusive
        swap L[i], L[j]

As we showed above, this pseudocode translates quite nicely to Python, and it obviously satisfies the requirements of not using a lot of extra space and running in linear time; we only visit each position in the list once, swaps take constant time, and all the swaps combined only use constant space. On the other hand, implementations in functional languages are a bit more difficult, and if the language is purely functional, it can't be done "in place." I'd usually be the last one to admit functional languages aren't the best tool for every job, but there you have it.

Nice article!
However, if myList is a list (as opposed to an array), each access in the inner loop is linear in the length of the list, and resulting shuffle seems quadratic in the length of myList. You’re right, but that is an implementation detail. Nothing about the idea of a list makes accessing elements inherently linear, and we chose Python in part for its nice lists (accessing is O(1) in Python). In fact, the “lists” of languages like Java and C++ are implemented as arrays themselves (ArrayList and vector, respectively), so “list” is a more general term. The linear access time for a linked list (say, in the style of Racket or Haskell) does make it unsuitable for this algorithm, and that’s why more advanced techniques are required. If I recall correctly, the Haskell implementation I linked to above first transformed the input list into a binary tree (which already brings it up to O(n log n)). I’d be interested to see any better functional implementations, but I highly doubt they’ll be any simpler than this swap-based solution. No, you’re right. This is the solution I have used for decades — in fact, I first implemented it in 6502 assembly language! I guess I just think too much like a lisper (schemer) these days, and I didn’t know that Python lists were array-based. There’s only pride to have in thinking like a schemer :)
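A quick way to gain empirical confidence in the uniformity claim (with an array-backed list, so access stays O(1)) is to shuffle a small list many times and check that every permutation shows up in roughly equal proportion. A Python 3 sketch of that check, using range rather than the post's xrange and a fixed seed so the run is repeatable:

```python
import random
from collections import Counter

def shuffle(lst):
    # The same swap-based algorithm from the post.
    n = len(lst)
    for i in range(n):
        j = random.randint(i, n - 1)  # inclusive on both ends
        lst[i], lst[j] = lst[j], lst[i]

random.seed(0)          # fixed seed so the experiment is reproducible
trials = 60000
counts = Counter()
for _ in range(trials):
    lst = [1, 2, 3]
    shuffle(lst)
    counts[tuple(lst)] += 1

# All 6 permutations of [1, 2, 3] should each appear ~trials/6 = 10000 times.
print(sorted(counts.values()))
```

This is a sanity check, not a proof; the inductive argument in the post is what actually establishes uniformity. A classic buggy variant (picking j from the whole range 0..n-1 on every step) fails this very test, producing visibly skewed counts.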
http://jeremykun.com/2012/03/18/in-place-uniform-shuffle/
> opengpssim.zip > getopt.c

/* ******************************************************************
 *                                                                  *
 * OPTIONS                                                          *
 *                                                                  *
 * ---------------------------------------------------------------- *
 *                                                                  *
 * Modul: getopt.c                                                  *
 *                                                                  *
 * Autor: gnu                                                       *
 *                                                                  *
 * Datum: 19.09.91                                                  *
 *                                                                  *
 * ---------------------------------------------------------------- *
 *                                                                  *
 * Purpose: process options                                         *
 *                                                                  *
 ****************************************************************** */

/* *************************** changes ******************************
   01.01.92 -
   ****************************************************************** */

/* --------------------------- includes ----------------------------- */

/* Getopt for GNU.
   Copyright (C) 1987, 1989 */

#if defined(__STDC__) || defined(__TURBOC__) || defined(VAXC)
#define STDC_HEADERS
#define CONST const
#else
#define CONST
#endif

/* _POSIX_OPTION_ORDER disables permutation.  Then the behavior is
   completely standard.  GNU application programs can use a third
   alternative mode in which they can distinguish the relative order
   of options and other arguments. */

#include

/* If compiled with GNU C, use the built-in alloca */
#ifdef __GNUC__
#define alloca __builtin_alloca
#else /* not __GNUC__ */
#ifdef sparc
#include
#else
char *alloca ();
#endif
#endif /* not __GNUC__ */

#if defined(STDC_HEADERS) || defined(__GNU_LIBRARY__)
#include
#include
#define bcopy(s, d, n) memcpy ((d), (s), (n))
#define index strchr
#else
#ifdef USG
#define bcopy(s, d, n) memcpy ((d), (s), (n))
#define index strchr
#endif
char *getenv ();
char *index ();
char *malloc ();
#endif

extern unsigned char ProgramName[];

/* Describe how to deal with options that follow non-option
   ARGV-elements.

   If the caller did not specify anything, the default is
   REQUIRE_ORDER if the environment variable _POSIX_OPTION_ORDER
   is defined, PERMUTE otherwise.

   REQUIRE_ORDER means don't recognize them as options.  Stop option
   processing when the first non-option is seen.  This is what Unix
   does.

   PERMUTE is the default.
We permute the contents of ARGV as we scan, so that eventually all the one. Using `-' as the first character of the list of option characters requests; /* Describe the long-named options requested by the application. _GETOPT_LONG_OPTIONS is a vector of `struct option' terminated by an element containing a name which is zero. The field `has_arg' is 1 if the option takes an argument, 2 if it takes an optional argument. */ struct option { char *name; int has_arg; int *flag; int val; }; CONST struct option *_getopt_long_options; int _getopt_long_only = 0; /* Index in _GETOPT_LONG_OPTIONS of the long-named option actually found. Only valid when a long-named option was found. */ int option_index; /* ( char **argv) { int nonopts_size = (last_nonopt - first_nonopt) * sizeof (char *); #if defined(__TURBOC__) || defined(VAXC) || defined(__ksr1__) || defined(__SUNOS__) char **temp = (char **) calloc( 1, nonopts_size); #else char **temp = (char **) alloca (nonopts_size); #endif /* Interchange the two blocks of data in argv. */ #if defined(__SUNOS__) bcopy ( (char*) &argv[first_nonopt], (char*) temp, nonopts_size); bcopy ( (char*) &argv[last_nonopt], (char*) &argv[first_nonopt], (optind - last_nonopt) * sizeof (char *)); bcopy ( (char*) temp, (char*) &argv[first_nonopt + optind - last_nonopt], nonopts_size); #else bcopy ( &argv[first_nonopt], temp, nonopts_size); bcopy ( &argv[last_nonopt], &argv[first_nonopt], (optind - last_nonopt) * sizeof (char *)); bcopy (temp, &argv[first_nonopt + optind - last_nonopt], nonopts_size); #endif /* `-', it requests a different method of handling the non-option ARGV-elements. See the comments about RETURN_IN. `getopt' returns 0 when it finds a long-named option. */ int getopt ( int argc, char **argv, char *optstring) { = 0; /* Determine how to handle the ordering of options and nonoptions. 
*/ if (optstring[0] == '-') ordering = RETURN_IN_ORDER; else if (getenv ("_POSIX_OPTION_ORDER") != 0) ordering = REQUIRE_ORDER; else ordering = PERMUTE; } if (nextchar == 0 || *nextchar == 0) { if (ordering == PERMUTE) { /* If we have just processed some options following some non-options, exchange them so that the options come first. */ if (first_nonopt != last_nonopt && last_nonopt != optind) exchange (argv); else if (last_nonopt != optind) first_nonopt = optind; /* Now skip any additional non-options and extend the range of non-options previously skipped. */ while (optind < argc && (argv[optind][0] != '-' || argv[optind][1] == 0) && (_getopt_long_options == 0 || argv[optind][0] != '+' || argv[optind][1] == 0))) && (_getopt_long_options == 0 || argv[optind][0] != '+' || argv[optind][1] == 0)) { if (ordering == REQUIRE_ORDER) return EOF; optarg = argv[optind++]; return 1; } /* We have found another option-ARGV-element. Start decoding its characters. */ nextchar = argv[optind] + 1; } if (_getopt_long_options != 0 && (argv[optind][0] == '+' || (_getopt_long_only && argv[optind][0] == '-')) ) { CONST struct option *p; char *s = nextchar; int exact = 0; int ambig = 0; CONST struct option *pfound = 0; int indfound; while (*s && *s != '=') s++; /* Test all options for either exact match or abbreviated matches. */ for (p = _getopt_long_options, option_index = 0; p->name; p++, option_index++) if (!strncmp (p->name, nextchar, (size_t)(s - nextchar))) { if (s - nextchar == strlen( p->name)) { /* Exact match found. */ pfound = p; indfound = option_index; exact = 1; break; } else if (pfound == 0) { /* First nonexact match found. */ pfound = p; indfound = option_index; } else /* Second nonexact match found. 
*/ ambig = 1; } if (ambig && !exact) { fprintf (stderr, "%s: option `%s' is ambiguous\n", ProgramName, argv[optind]); nextchar += strlen (nextchar); return '?'; } if (pfound != 0) { option_index = indfound; optind++; if (*s) { if (pfound->has_arg > 0) optarg = s + 1; else { fprintf (stderr, "%s: option `%c%s' doesn't allow an argument\n", ProgramName, argv[optind - 1][0], pfound->name); nextchar += strlen (nextchar); return '?'; } } else if (pfound->has_arg == 1) { if (optind < argc) optarg = argv[optind++]; else { fprintf (stderr, "%s: option `%s' requires an argument\n", ProgramName, argv[optind - 1]); nextchar += strlen (nextchar); return '?'; } } nextchar += strlen (nextchar); if (pfound->flag) *(pfound->flag) = pfound->val; return 0; } /* Can't find it as a long option. If this is getopt_long_only, and the option starts with '-' and is a valid short option, then interpret it as a short option. Otherwise it's an error. */ if (_getopt_long_only == 0 || argv[optind][0] == '+' || index (optstring, *nextchar) == 0) { if (opterr != 0) fprintf (stderr, "%s: unrecognized option `%c%s'\n", ProgramName, argv[optind][0], nextchar); nextchar += strlen (nextchar); return '?'; } } /* Look at and handle the next option-character. */ { char c = *nextchar++; char *temp = index (optstring, c); /* Increment `optind' when we start to process its last character. */ if (*nextchar == 0) optind++; if (temp == 0 || c == ':') { if (opterr != 0) { if (c < 040 || c >= 0177) fprintf (stderr, "%s: unrecognized option, character code 0%o\n", ProgramName, c); else fprintf (stderr, "%s: unrecognized option `-%c'\n", ProgramName, c); } return '?'; } if (temp[1] == ':') { if (temp[2] == ':') { /* This is an option that accepts an argument optionally. 
*/ if (*nextchar != 0) { optarg = nextchar; optind++; } else optarg = 0; nextchar = 0; } != 0) fprintf (stderr, "%s: option `-%c' requires an argument\n", ProgramName, c); c = '?'; } else /* We already incremented `optind' once; increment it again when taking next ARGV-elt as argument. */ optarg = argv[optind++]; nextchar = 0; } } return c; } } #ifdef TEST /* Compile with -DTEST to make an executable for use in testing the above definition of `getopt'. */ UBYTE ProgramName[] = "getopt"; int main ( int argc, char **argv) { char c; int digit_optind = 0; while (1) { int this_option_optind = optind; if ((c = GetOpt (argc, argv, "abc:d:0123456789")) =="); } return 0; } #endif /* TEST */ /* ============================ End of File ========================= */
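The REQUIRE_ORDER and PERMUTE behaviors described in the comments above are easy to observe from Python's standard library, whose getopt module follows this C implementation: getopt.getopt stops at the first non-option argument, while getopt.gnu_getopt permutes so that options after non-options are still collected.

```python
import getopt

argv = ["file1", "-a", "-c", "value", "file2"]

# Plain getopt behaves like REQUIRE_ORDER: it stops at the first
# non-option, so "file1" hides all of the options.
opts1, rest1 = getopt.getopt(argv, "ac:")

# gnu_getopt implements the PERMUTE behavior: options are collected
# and the non-options are gathered together at the end.
opts2, rest2 = getopt.gnu_getopt(argv, "ac:")

print(opts1, rest1)  # [] ['file1', '-a', '-c', 'value', 'file2']
print(opts2, rest2)  # [('-a', ''), ('-c', 'value')] ['file1', 'file2']
```

Here "ac:" plays the role of the optstring in the C code: -a takes no argument, -c requires one.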
http://read.pudn.com/downloads100/sourcecode/comm/407312/opengpssim/lib/getopt.c__.htm
I would like to have a "Notepad++ shortcut" that will jump to a line in a file

Title: I would like to have the ability to create a string that Notepad++ recognizes as a clickable link. Clicking on it would take me to the specified line in the specified file.

Description: Basically, I want the ability to create a specially formatted text string that Notepad++ recognizes as a clickable shortcut. I would embed this string into a comment. Clicking on it in Notepad++ will take me to the targeted line in the targeted file. If the file isn't loaded, Notepad++ would load the file, go to the specified line and highlight that line. If the file is already loaded, Notepad++ would switch to that tab, change to the specified line and highlight that line.

Why do I want this? I would like to insert references in my source code to code in other files. In a JavaScript file, for example, I would like to put in comments like:

// Function "editManItem()" is called when a user clicks an "Edit" button from
// inside the HTML table of line items.
// Works in combination with:
//   function removeManItem(id)   npp://./s/^function removeManItem/+1
//   function onEditSet(id)       npp://UserInterface.js/s//+45

npp://./s/function removeManItem/+1 would take me to the current file ("."), do a regex text search for the string "function removeManItem" at the beginning of a line ("s/^function removeManItem/"), then go to the next line ("+1").

npp://UserInterface.js/s//+45 would take me to the file "UserInterface.js", not do any text search ("s//"), which would place the cursor at the first line of the file, then go 45 lines down ("+45"), essentially going to line #45 of that file.

Essentially this is a cross-reference capability to source code in another file. It's dynamic in that it searches based on regular expressions instead of going to a specific line in the file, but (as shown in the 2nd example) it also allows going to specific line numbers.
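For what it's worth, the proposed npp:// syntax is straightforward to parse mechanically, which is most of the work any plugin implementation would need. A hypothetical Python sketch (both the format's grammar as I've read it and the parse_npp_link helper are illustrations, not an existing Notepad++ feature):

```python
import re

# Hypothetical grammar for the proposed link format:
#   npp://<file>/s/<regex>/<+lines>
# <file> may be "." for the current file, <regex> may be empty,
# and the trailing line offset is optional.
LINK_RE = re.compile(r"npp://(?P<file>[^/]+)/s/(?P<search>[^/]*)/(?P<offset>[+-]\d+)?")

def parse_npp_link(link):
    """Split an npp:// shortcut into (file, search regex, line offset),
    or return None if the text is not a link in this format."""
    m = LINK_RE.match(link)
    if not m:
        return None
    offset = int(m.group("offset") or 0)
    return m.group("file"), m.group("search"), offset

print(parse_npp_link("npp://./s/^function removeManItem/+1"))
# -> ('.', '^function removeManItem', 1)
print(parse_npp_link("npp://UserInterface.js/s//+45"))
# -> ('UserInterface.js', '', 45)
```

A scripting-plugin handler would then open the file, run the search (or start at line 1 when the regex is empty), and move the caret by the offset.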
- Ekopalypse last edited by Ekopalypse

It could be achieved by using a scripting plugin like PythonScript. Some simple demo code to show how to get the click notification:

from Npp import editor, SCINTILLANOTIFICATION

def goto(args):
    start = editor.wordStartPosition(args['position'], False)
    end = editor.getLineEndPosition(editor.lineFromPosition(args['position']))
    print(editor.getTextRange(start, end))

editor.callback(goto, [SCINTILLANOTIFICATION.HOTSPOTCLICK])

but if you want to have this built in, then you might consider opening a feature request.

- Alan Kilborn last edited by Alan Kilborn

Just a comment. It sure would be ideal if the notification itself provided the hotspot text, but sadly, no… Your method of getting the text isn't perfect, as I'm sure you know. Maybe a better way would be to find the "bracketing" whitespace around the position, as whitespace definitely seems to break up a url when embedded in a Notepad++ document? Also, won't this script intercept the existing behavior of clicking on http urls?

- Ekopalypse last edited by

You are right, it is far from solving the OP's issue; its purpose was just to demonstrate that it is possible. If the OP wants to go that way, then I would do it more like:

- get the line of the link
- parse the line to identify which kind of link it is
- do the needed action

but that would mean the OP needs to define exactly what is required.

Also, won't this script intercept the existing behavior of clicking on http urls?

I don't think so, it is a notification received by npp and PS independently.

- Michael Vincent last edited by

Why not just use a cTags plugin? What you're describing is exactly what cTags does, without needing to put in specially formatted strings. You just run cTags on your code to generate a tags file which has all the function definitions, and then use the cTags plugin to navigate to them with a shortcut key. There is cTagsView, TagLEET, NppGTags. I prefer TagLEET and use a modified version of it.
cTagsView is "automatic" but only works on the currently viewed file, so maybe not what you want. I have not used NppGTags, but others have mentioned it on this site. Cheers.

You can instruct ctags to generate a tag for every file name with --extra=f. For TagLEET, with the following text:

UserInterface.js:45 // any comment you want

if your caret is in UserInterface.js, Tag Lookup will jump to line 45. This feature was intended to let you copy-paste compilation results from a terminal and jump to errors and warnings.

@Michael-Vincent looking at your TagLEET modifications.

- Michael Vincent last edited by

@gstavi said in I would like to have a "Notepad++ shortcut" that will jump to a line in a file:

@Michael-Vincent looking at your TagLEET modifications.

We talked about these a while back over email. I went ahead and tried it and got the added column I wanted. Since then I have added a few more columns and added autocomplete ability using either the TagLEET pop-up or Notepad++'s native Scintilla autocomplete, based on the info in the tag file. I like this feature since it allows me to autocomplete across a bunch of opened files in the same project (they share the same tags file), whereas the plain vanilla Notepad++ autocomplete only works within a single document. Cheers.

- Alan Kilborn last edited by

@Ekopalypse said in I would like to have a "Notepad++ shortcut" that will jump to a line in a file:

It could be achieved by using a scripting plugin like PythonScript. Some simple demo code to show how to get the click notification.

While revisiting this old thread, I see the PythonScript, but what I don't see is how it would work in the OP's case, as there isn't any "hotspotted" text. @Ekopalypse can you comment? I feel like I am missing something that perhaps I once understood, but now that time has gone by, things are murky.

- Ekopalypse last edited by

I barely know what I ate yesterday, how am I supposed to know what I thought about something a year ago?
:-D Links can be created, as we see with URLs; based on that, I suppose I came up with this idea, but it looks like I never dug deeper since it wasn't requested.
Change of type alignment and the consequences

A discussion on the RSDN forum [1] began with a program that prints the sizes and alignments of the built-in types. After that the author dwells upon data compatibility and asks for advice on how to pack data in the structure. But at the moment we are not interested in this. What we are interested in is that there is a new type of error which can occur when porting applications to a 64-bit system. It is clear and common that when the sizes of fields in a structure change, the size of the structure itself changes [2]. I've modified the program a bit for Visual Studio and got this:

#include <iostream>
using namespace std;

template <typename T>
void print (char const* name)
{
    cerr << name << " sizeof = " << sizeof (T)
         << " alignof = " << __alignof (T) << endl;
}

int _tmain(int, _TCHAR *[])
{
    print<bool>        ("bool          ");
    print<wchar_t>     ("wchar_t       ");
    print<short>       ("short int     ");
    print<int>         ("int           ");
    print<long>        ("long int      ");
    print<long long>   ("long long int ");
    print<float>       ("float         ");
    print<double>      ("double        ");
    print<long double> ("long double   ");
    print<void*>       ("void*         ");
}

I compared the data I'd got with the data described in the article "C++ data alignment and portability" for GNU/Linux systems and give both in Table 1.

Table 1. Types' sizes and alignment.

Let's study this table. Pay attention to the marked cells relating to long long int and double. These types' sizes don't depend on the architecture's size and therefore don't change: both on 32-bit and 64-bit systems their size is 8 bytes. But their alignment differs between 32-bit and 64-bit systems, and that can change the structure's size.

References

- RSDN Forum. Alignment on 64-bit architectures.
- Boris Kolpackov. C++ data alignment and portability.
snprintf()

Write formatted output to a character array, up to a given maximum number of characters

Synopsis:

#include <stdio.h>

int snprintf( char* buf,
              size_t count,
              const char* format,
              ... );

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The snprintf() function is similar to fprintf(), except that snprintf() places the generated output (up to the specified maximum number of characters) into the character array pointed to by buf, instead of writing it to a file. The snprintf() function is similar to sprintf(), but with boundary checking. A null character is placed at the end of the generated character string.

Returns:

The number of characters that would have been written into the array, not counting the terminating null character, had count been large enough. It does this even if count is zero; in this case buf can be NULL. If an error occurred, snprintf() returns a negative value and sets errno.

Examples:

#include <stdio.h>
#include <stdlib.h>

/* Create temporary file names using a counter */

char namebuf[13];
int TempCount = 0;

char *make_temp_name( void )
{
    /* illustrative format string; the result fits in namebuf */
    snprintf( namebuf, 13, "tm%06d.tmp", TempCount++ );
    return( namebuf );
}

Classification:

Caveats:

It's safe to call this function in a signal handler if the data isn't floating point. Be careful if you're using snprintf() to build a string one piece at a time. For example, this code:

len += snprintf(&buf[len], RECSIZE - 1 - len, ...);

could have a problem if snprintf() truncates the string. Without a separate test to compare len with RECSIZE, this code doesn't protect against a buffer overflow. After the call that truncates the output, len is larger than RECSIZE, and RECSIZE - 1 - len is a very large (unsigned) number; the next call generates unlimited output somewhere beyond the buffer.
See also: errno, fprintf(), fwprintf(), printf(), sprintf(), swprintf(), vfprintf(), vfwprintf(), vprintf(), vsnprintf(), vsprintf(), vswprintf(), vwprintf(), wprintf()
Formation 8 was an American venture capital firm founded in 2011 by Joe Lonsdale, Jim Kim and Brian Koo.[1] The company was headquartered in San Francisco, California. The firm was one of the most successful venture capital firms in the industry before abruptly disbanding in November 2015.[1] History The company was founded in 2011 by three partners: Jim Kim, Brian Koo and Joe Lonsdale. The team was later joined by James Zhang of Softbank China Venture Capital and BioDiscovery and Tom Baruch, founder of CMEA Capital and Director of Intermolecular. The firm added Gideon Yu, the former Facebook Chief Financial Officer who is now president of the San Francisco 49ers, as a special adviser to the firm.[2] In 2012, Formation 8 intended to close its first round fund with $200 million, but delayed until 2013 to accommodate more limited partners.[3] In April 2013, it closed its first round fund with $448 million.[4] It recorded a net internal rate of return (IRR) of 95%, easily making it one of the best-performing funds in the industry. Its notable hits included Oculus, acquired by Facebook for $2 billion, and RelateIQ, the startup bought by Salesforce.com for $390 million. In 2013, Fortune Magazine described the firm as "the hottest venture capital since Andreessen."[5] In December 2014, it closed a second fund of $500 million, and added some of the billion-dollar unicorn startups to its portfolio, including Illumio and South Korean mobile company Yello Mobile.[6] In November 2015, the company disbanded and the founders went their separate ways, reportedly because the founding partners had different investment strategies and interests.[7] Founders - Jim Kim - Kim was a partner at Khosla Ventures, and before that was a senior partner at CMEA.[8] - Brian Bonwoong Koo - Koo founded InnovationHub, which helped his family business LS Group, source investment opportunities.
Koo also co-founded Harbor Pacific Capital, a venture capital fund which was focused on helping young technology companies navigate business development in Asia.[9]
- Joe Lonsdale - Lonsdale founded Anduin Ventures, a seed fund focused on Silicon Valley.[10] Lonsdale co-founded Palantir Technologies in 2004 with Peter Thiel, Alex Karp, Stephen Cohen, and Nathan Gettings.[11]

Portfolio companies
- Aka Study
- Augury[12]
- Blend Labs
- Context Relevant
- Foro Energy
- Hyperdyadic
- LearnSprout
- Oculus
- OpenGov
- RelateIQ
- Venturebeat
- Wish (ContextLogic)
- Yello Mobile

References
- ^ a b "A partner at top Silicon Valley firm Formation 8 explains why the VC dream team just broke up". businessinsider.com. 2015-11-14. Retrieved 2017-08-04.
- ^ "New VC fund Formation 8 has $450M, star partners". Silicon Valley Business Journal. April 12, 2013. Retrieved November 14, 2018.
- ^ "Formation 8 Raises Its First Fund Of $448M To Plug Silicon Valley Startups Into Asian Conglomerates". TechCrunch. April 18, 2013. Retrieved November 14, 2018.
- ^ "SEC Form D". SEC. Retrieved November 14, 2018.
- ^ "The hottest VCs since Andreessen". CNN. April 11, 2013. Retrieved November 14, 2018.
- ^ Ryan Lawler (December 3, 2014). "Formation 8 Closes Its $500 Million Second Fund". TechCrunch.
- ^ "What Caused Formation 8 To Split, and What Comes Next?". fortune.com. 2015-10-11. Retrieved 2017-08-04.
- ^ "James Kim". Crunchbase. Retrieved November 14, 2018.
- ^ "Archived copy". Archived from the original on 2013-05-29. Retrieved 2013-08-13.
- ^
- ^ Drew Olanoff (May 1, 2013). "Joe Lonsdale Of Formation 8 Sees Government, Finance, Healthcare, Energy And Logistics As Ripe Areas For Disruption". TechCrunch. Retrieved February 28, 2017.
- ^
Posted by Zafir Anjum on August 6th, 1998

There are some constraints that have to be adhered to. Since the tree view control computes the height of the items (all items have the same height) based on the window font, if we change the font size we can only decrease it so that the text does not overlap with the other labels. Also, the tree view control automatically manages the horizontal scrollbar, so it is better to maintain the width of the label.

Step 1: Add member variable to track font and color

Since the control has no support for item font or color, we have to track this information within our program. We use a CMap object to associate these properties with the tree items. The map will contain information for only those items that we explicitly change. We define a nested structure that is used with the CMap object to hold the color and the font information.

protected:
    struct Color_Font {
        COLORREF color;
        LOGFONT logfont;
    };
    CMap< void*, void*, Color_Font, Color_Font& > m_mapColorFont;

Step 2: Add helper functions to get/set font and color

Define the helper functions to get or set the item font or color. To set the font, we actually pass the logfont rather than a font handle. Also note that we have defined a pair of functions to get and set the bold attribute. There are two reasons for providing a separate function for the bold attribute although we can use the font function. The first reason is that the tree view control directly supports setting an item to bold. Secondly, using the built-in support also maintains the proper setting for the horizontal scrollbar.

void CTreeCtrlX::SetItemFont(HTREEITEM hItem, LOGFONT& logfont)
{
    Color_Font cf;
    if( !m_mapColorFont.Lookup( hItem, cf ) )
        cf.color = (COLORREF)-1;
    cf.logfont = logfont;
    m_mapColorFont[hItem] = cf;
}

void CTreeCtrlX::SetItemBold(HTREEITEM hItem, BOOL bBold)
{
    SetItemState( hItem, bBold ? TVIS_BOLD : 0, TVIS_BOLD );
}

void CTreeCtrlX::SetItemColor(HTREEITEM hItem, COLORREF color)
{
    Color_Font cf;
    if( !m_mapColorFont.Lookup( hItem, cf ) )
        cf.logfont.lfFaceName[0] = '\0';
    cf.color = color;
    m_mapColorFont[hItem] = cf;
}

BOOL CTreeCtrlX::GetItemFont(HTREEITEM hItem, LOGFONT * plogfont)
{
    Color_Font cf;
    if( !m_mapColorFont.Lookup( hItem, cf ) )
        return FALSE;
    if( cf.logfont.lfFaceName[0] == '\0' )
        return FALSE;
    *plogfont = cf.logfont;
    return TRUE;
}

BOOL CTreeCtrlX::GetItemBold(HTREEITEM hItem)
{
    return GetItemState( hItem, TVIS_BOLD ) & TVIS_BOLD;
}

COLORREF CTreeCtrlX::GetItemColor(HTREEITEM hItem)
{
    // Returns (COLORREF)-1 if color was not set
    Color_Font cf;
    if( !m_mapColorFont.Lookup( hItem, cf ) )
        return (COLORREF)-1;
    return cf.color;
}

Step 3: Add WM_PAINT handler

In this function we first let the control update a memory device context. We then redraw the visible labels using the user-defined attributes. We let the control handle the highlighting of items, so before we update a label we make sure that it is not selected or drop-highlighted. Also, if the item's font or color attributes were not changed, we don't need to redraw it. Once all the updates are ready in the memory device context, we blit it to the actual device context. After the default window procedure for the control has updated the device context, we scan through all the visible items and update the items that have a user-defined color or font.
    HTREEITEM hItem = GetFirstVisibleItem();
    int n = GetVisibleCount()+1;
    while( hItem && n-- )
    {
        CRect rect;

        // Do not meddle with selected items or drop highlighted items
        UINT selflag = TVIS_DROPHILITED | TVIS_SELECTED;
        Color_Font cf;
        if ( !(GetItemState( hItem, selflag ) & selflag )
             && m_mapColorFont.Lookup( hItem, cf ))
        {
            CFont *pFontDC;
            CFont fontDC;
            LOGFONT logfont;

            if( cf.logfont.lfFaceName[0] != '\0' )
            {
                logfont = cf.logfont;
            }
            else
            {
                // No font specified, so use window font
                CFont *pFont = GetFont();
                pFont->GetLogFont( &logfont );
            }

            if( GetItemBold( hItem ) )
                logfont.lfWeight = 700;

            fontDC.CreateFontIndirect( &logfont );
            pFontDC = memDC.SelectObject( &fontDC );

            if( cf.color != (COLORREF)-1 )
                memDC.SetTextColor( cf.color );

            CString sItem = GetItemText( hItem );

            GetItemRect( hItem, &rect, TRUE );
            memDC.SetBkColor( GetSysColor( COLOR_WINDOW ) );
            memDC.TextOut( rect.left+2, rect.top+1, sItem );

            memDC.SelectObject( pFontDC );
        }
        hItem = GetNextVisibleItem( hItem );
    }

    dc.BitBlt( rcClip.left, rcClip.top, rcClip.Width(), rcClip.Height(),
               &memDC, rcClip.left, rcClip.top, SRCCOPY );
}

Step 4: Go ahead and change the item font or color

Here are some examples.

// Change the item color to red
SetItemColor( hItem, RGB(255,0,0) );

// Change the item to italicized and underlined font
LOGFONT logfont;
CFont *pFont = GetFont();
pFont->GetLogFont( &logfont );
logfont.lfItalic = TRUE;
logfont.lfUnderline = TRUE;
SetItemFont( hItem, logfont );

Posted by sunbaogang on 01/15/2009 11:42am
thanks

Here it is!
Posted by appleiii on 04/12/2007 08:28am
thanks a lot. this is exactly what I want.

Correct background color
Posted by gizmocuz on 11/19/2005 11:24pm
For correct background color change this line
    memDC.SetBkColor( GetSysColor( COLOR_WINDOW ) );
to
    memDC.SetBkColor( GetBkColor() );

Nice and helpful
Posted by portugalec on 05/26/2005 05:34pm
I found this code very helpful. Bit tricky, but works fine.
Since there is no "SetItemColor" support from MFC for TreeCtrl, this solution with "drawing over item" is quite smart :-) Well done!

it's perfect
Posted by xingshi on 08/26/2004 05:11am
it's perfect

we should add DeleteAllItems( ) for correct working
Posted by Legacy on 09/05/2003 07:00am
Originally posted by: talai

BOOL CTreeCtrlX::DeleteAllItems()
{
    m_mapColorFont.RemoveAll();
    return CTreeCtrl::DeleteAllItems();
}

Short Circuit Problem?
Posted by Legacy on 08/18/2003 07:00am
Originally posted by: Ed

Text v-alignment
Posted by Legacy on 07/31/2002 07:00am
Originally posted by: jared007

Hi, I'm pretty stumped here... I have a control, just as in this example, with buttons and custom text fonts. How can I get the height of the button (+,-) in the control, so I can draw the text the same height as it and vertically align it with the button? Right now, when I change the font, the text often appears vertically aligned above the button. How far it is drawn above the button depends on the font type and weight, but I would guess it ranges from 1 - 3 pixels.

Better way to change color of item using Custom Draw
Posted by Legacy on 06/10/2002 07:00am
Originally posted by: Rudy Kappert
timers(5)

timers - timers and process time accounting information

#include <sys/time.h>

The timing facilities under IRIX consist of interval timers, event timing, process execution time accounting, and time of day reporting. Interval timers consist of POSIX timers (see timer_create(3C)) and itimers that were introduced in BSD4.2 (see getitimer(2)). Use of the POSIX timers is strongly recommended for new applications. The IRIX-unique BSD4.2 itimers are supported only to provide compatibility for older applications. On Silicon Graphics machines there are two independent timers per processor. The first interrupts the processor at a regular interval of 10 milliseconds and is used for scheduling and statistics gathering. The second interrupts the processor at fasthz frequency and is used to support the high resolution POSIX and itimer capabilities. On multiprocessor machines, one processor is used to maintain system time and is labeled the clock processor. One additional processor is required to service the POSIX timer and itimer requests and is labeled the fast clock processor. The mpadmin(1) command can be used to bind the clock and fast clock to a particular physical processor. A realtime process (one with priority between NDPHIMAX and NDPHIMIN) may make POSIX timer requests, or itimer requests with a resolution greater than 10 milliseconds, when using the realtime timer. The limit on the resolution depends on the underlying hardware and can be dynamically determined by examining the variable fasthz using systune(1M) or by opening /dev/kmem and reading fasthz as a 4-byte word. The fasthz variable can also be modified using systune(1M). On the Indigo, Indy, Indigo2, O2, and Octane products, acceptable values for fasthz are 500, 1000, and 2500 Hz. If the requested value is not one of these values, then the default value of 1000 Hz is used.
On the Challenge Series, the resolution of the hardware timer is 21 nanoseconds and therefore any value smaller than 47 MHz is possible. For realistic results, no fasthz value larger than 2000 Hz should be specified because the kernel cannot reliably deliver itimer signals at a greater rate. On Onyx2, Onyx3, and Origin systems, fasthz has a value of 1250 Hz, which gives a resolution of 0.8 milliseconds. For processes running with either the FIFO or RR scheduling policies, both POSIX timers with CLOCK_SGI_FAST and itimers with ITIMER_REAL have a resolution of 0.8 milliseconds. This does not necessarily mean that timer interrupts can be received at that frequency; the timers simply have that resolution. It is not possible to achieve better than 10 millisecond timer accuracy when running without either a FIFO or RR scheduling policy. To take low latency timestamps with maximum resolution on Onyx2, Onyx3 and Origin
Traditionally under UNIX, the time reported by these system calls is measured by the scheduling clock. On each clock tick the kernel examines the processor status word and charges the running process a tick's worth of execution time in user or system mode. The most significant drawback of this scheme is limited precision. Under IRIX, the kernel keeps track of process state transitions between user and system modes and accumulates the elapsed time between state transitions. This information is available through the times(2) and getrusage(2) system calls. System time of day can be obtained via the gettimeofday(3B) system call. On the Challenge Series, there is a 64 bit counter that is used to maintain system time. The system initializes a timebase at startup using the battery backed time of day clock and associates a counter value with that timebase. Subsequent gettimeofday() calls will return the original timebase plus the difference between the current counter value and the original startup counter value. The resolution of this 64 bit counter is 21 nanoseconds. A gettimeofday() call causes the kernel to report the current time plus the difference between the current counter value and the last snapshot value of the counter from the scheduling clock. On some other Silicon Graphics machines, there is a 64 bit data structure that is maintained by the clock processor. On every clock tick, the kernel updates that data structure by an amount equal to the clock tick (typically 10 milliseconds). A gettimeofday() call will return the current value of that structure. When timed(1M) is running, the gettimeofday() and time(2) results will be adjusted to match time of other machines running timed within a local area. On Challenge Series machines timers live on the processor where the program that created them was running at the time they were created. 
A timer stays connected to that processor until it expires, is disabled, or the user restricts or isolates the processor with the timer. If a user restricts or isolates a processor with timers, all of the timers are moved to the processor that owns the clock as reported by sysmp(2).

SEE ALSO

clock_gettime(), clock_settime(), clock_getres(), getitimer(2), sysmp(2), syssgi(2), time(2), timer_create(3C), timer_delete(3C), timer_getoverrun(3C), timer_gettime(3C), times(2), getrusage(3), gettimeofday(3B)
This chapter is a reference for the entire runtime library. As you can see, it is a big one. To help you find what you need, each header in this chapter is organized in alphabetical order. If you are not sure which header declares a particular type, macro, or other identifier, check the index. Once you find the right page, you can quickly see which header you must #include to define the identifier you need. The subsections in each header's section describe the functions, macros, classes, and other entities declared and defined in the header. The name of the subsection tells you what kind of entity is described in the subsectione.g., "terminate function," "basic_string class template," and so on. Cross references in each "See Also" heading list intrasection references first, followed by references to other headers (in this chapter) and references to keywords (in Chapter 12). The subsection for each class or class template contains descriptions of all important members. A few obvious or do-nothing members are omitted (such as most destructors) for the sake of brevity. The entire standard library resides in the std namespace, except that macros reside outside any namespace. Be sure to check the subsection name closely so you know whether an identifier is a macro or something else. To avoid cluttering the reference material, the std:: prefix is omitted from the descriptions. Examples, however, are complete and show how each namespace prefix is properly used. Some C++ headers are taken from the C standard. For example, the C standard <stdio.h> has its C++ equivalent in <cstdio>. The C++ version declares all the C names (other than macros) in the std:: namespace but reserves the same names in the global namespace, so you must not declare your own names that conflict with those of the C headers. Each C header can be used with its C name, in which case the declarations in the header are explicitly introduced into the global namespace. 
For example, <cstdio> declares std::printf (and many other names), and <stdio.h> does the same, but adds "using std::printf" to bring the name printf into the global namespace. This use of the C headers is deprecated. The syntax description for most macros shows the macro name as an object or function declaration. These descriptions tell you the macro's type or expected arguments. They do not reflect the macro's implementation. For macros that expand to values, read the textual description to learn whether the value is a compile-time constant. For an overview of the standard library, see Chapter 8. Chapter 9 presents the I/O portions of the library, and Chapter 10 discusses containers, iterators, and algorithms. C++ permits two kinds of library implementations: freestanding and hosted. The traditional desktop computer is a hosted environment. A hosted implementation must implement the entire standard. A freestanding implementation is free to implement a subset of the standard library. The subset must provide at least the following headers, and can optionally provide more:
04 May 2010 16:00 [Source: ICIS news] LONDON (ICIS news)--Qatar Petrochemical Co's (QAPCO) Ras Laffan Olefin Cracker (RLOC), based on ethane, in Qatar, has been inaugurated, project partner Total Petrochemicals said on Tuesday. Ethylene from the 1.3m tonne/year plant would be piped from the offshore gas field, North Field, across Qatar to Mesaieed, where it will feed Qatofin's new 450,000 tonne/year linear low density polyethylene (LLDPE) plant, which was inaugurated last November. Total said that through participations in Qapco and Qatofin, it holds 22.2% of the RLOC project, which had been hit by delays. The other partners are Qatar Petroleum and Chevron Phillips Chemical Company, the group said. "With the start-up last November of the Qatofin polyethylene plant and now with the Ras Laffan cracker, Total is further strengthening its partnership with the Qatari energy and petrochemical sector," said Francois Cornelis, vice chairman of the executive committee of Total and president, Chemicals. "These major projects are considerably enhancing our position in petrochemicals, in particular in the growing markets in Asia and the

Ras Laffan is being constructed using technology from Qatofin, a joint venture between
15 July 2008 16:20 [Source: ICIS news] SAN ANTONIO, Texas (ICIS news)--High food prices and lower reserves will drive increased crop production in the near future and lead to an increase in demand for fertilizers, the chief executive of Potash Corporation of Saskatchewan (PotashCorp) said on Tuesday. Bill Doyle said the world had consumed more grain than it had produced for eight of the last 10 years and was becoming increasingly reliant on reserves. "The world needs more food and farmers are being paid higher prices to deliver more grain. This is providing the necessary incentive to increase production," Doyle said. Meanwhile, the economies of many developing nations were allowing their citizens to pay for more nutritional diets than ever, Doyle said at the Southwestern Fertilizer Conference. Millions of people were "at a situation in their lives where they can afford to eat meat for the first time", Doyle said. "When you see GDP [gross domestic product] growth of 10% in China over a 15-year period, year after year after year, you go from an economy where essentially no one - or just the politicians - were eating proteins, to a situation where protein demand was outstripping supply of protein," Doyle said. Doyle predicted an increase in meat prices and said individuals in the

"People are going to eat better, they're going to be more educated, they're going to be more peaceful. It's going to be a good thing for the world but we're going to need to accept we're going to have higher food prices," Doyle said. In 1950, 1.3 acres of cropland existed per person, Doyle said, while by 1990 that number had shrunk to 0.68 acres per person and was expected to continue to decline to about half an acre of cropland per person by 2020.
Doyle defended the fertilizer industry against charges it was the cause of increasing global food costs, claiming that a food shortage had been developing for a number of years and that farm and fertilizer costs made up a small fraction of costs at grocery stores. “The only way to fight food inflation is to grow more food,” Doyle said. The Southwestern Fertilizer Conference is
RSS and Atom fundamentals

The Really Simple Syndication (RSS) and Atom standards provide XML structures of items for a variety of different uses. The most common use for both RSS and Atom feeds is as the data dissemination format to promote weblogs and news sites. RSS and Atom feeds contain relatively small amounts of information, so clients can download the files cheaply and reduce the load on the Web servers compared with serving the full page a user would otherwise view. In addition, the RSS and Atom files also contain more detailed classification information, such as author, title, subject and keyword tagging, to help identify and organize the data within the feeds. You can see a sample of an RSS feed, here taken from my blog, in Listing 1.

Listing 1. Sample RSS feed

The same information, in Atom format, is in Listing 2.

Listing 2. Atom simple

Table 1 summarizes the information that you can extract from the RSS and Atom files, and lists the corresponding XML tags for each type of information. You'll need this later to parse and process the contents of these individual files.

Table 1. Summary of information that you can extract from the RSS and Atom files

Typically, you parse the contents of the XML files that make up the feed information and then print out that information in a format that suits you.

Traditional RSS and Atom processing

Before you look at the XQuery solution, you'll examine how more traditional solutions address the problem of parsing RSS and Atom files and generating output. For the purposes of the demonstration, you'll convert an RSS and Atom feed into HTML.

The traditional method to process an RSS or Atom feed is to use a programming language (such as Perl, PHP or Java) and parse the full contents of the XML file. You then output the information either dynamically or into a static HTML file to display it. You can see a sample of a Perl processor in Listing 3.
The script uses the XML::FeedPP module, which handles a lot of the complexity for you. The module downloads and parses the XML and returns the information as an object that you can iterate over to print out each item's title and link address.

Listing 3. A Perl-based parser taking advantage of the XML::FeedPP module

Running the script, you get output similar to that in Listing 4. The output is in HTML, although of course the benefit of a programming-language solution is that you might instead have inserted the information into a database.

Listing 4. The truncated output from a Perl-based RSS parser

One issue with the programming solution is that processing XML is comparatively complex, and different implementations and languages handle XML processing with different levels of ability. Most problematic of all, especially in the majority of languages, is that although the markup and the programming elements are often combined in the same file, the overall process can be quite hard to follow. Modifying the output style and layout can be difficult and even error prone, because it can require significant changes in the programming logic.

Another alternative is to use an XSLT stylesheet and convert the information on the fly into HTML. An example of the XSLT, producing the same basic output as the Perl script, is shown in Listing 5.

Listing 5. Using an XSLT stylesheet

The XSLT solution has the major benefit that you can embed the programming portion of the processing in the same file as the source of the formatting. You can see the basic structure of the document, even with the addition of the XSL statements that parse the individual components. The downside of XSLT is that the complexity of the input XML and of the output files can lead to ever more complicated processing.
Although XSLT supports basic programming notions such as loops, and even some complex data and information handling, its capabilities are very limited compared to a full programming language. That complexity can lead to slow processing, especially on very large and complex files. In this example, writing an XSL transformation that handles all the elements of both RSS and Atom feeds simultaneously would be difficult, but not impossible; understanding the output and how it works, however, could be harder still.

Converting RSS on the fly using XQuery

XQuery combines the flexibility of the XPath specification language, which extracts individual elements, with the ability to easily define functions, loops and other programmable elements. The combination turns the simplified path processing in XPath into a more flexible way to read and manipulate the information during processing. Unlike XSLT, XQuery has a more familiar programming environment and execution model, and some strong typing, which make it easier to work with the information without having to resort to a solution based on a general-purpose programming language.

Start with a very simple equivalent to the previous examples that outputs the information from your RSS source as a basic HTML file (Listing 6).

Listing 6. A simple XQuery-based RSS parser

You can dissect the query as follows:

- The main component is the portion in the <html> tags; this includes a call to the local rss-summary function, providing the RSS source (in this case a local file, although it could be a URL).
- The previously declared rss-summary function uses a for loop to iterate over each item, using an XPath specification to select each one.
- For each item you call the local rss-row function, which takes the supplied link and title text and inserts them into an HTML fragment.

You can execute the query with the GNU Qexo library, which provides an XQuery component:

$ java -jar kawa-1.9.1.jar --xquery --main simplerss.xql
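Because the listing itself is only described above, here is a hedged sketch of what such a query might look like; the function and file names follow the description in the text, and this is illustrative rather than the original Listing 6:

```xquery
declare function local:rss-row($link as xs:string, $title as xs:string)
{
  (: format one feed item as an HTML list entry :)
  <li><a href="{$link}">{$title}</a></li>
};

declare function local:rss-summary($source as xs:string)
{
  (: iterate over each item in the feed via an XPath expression :)
  for $item in doc($source)//item
  return local:rss-row(string($item/link), string($item/title))
};

<html><body>
  <ul>{ local:rss-summary("planet.rss2.xml") }</ul>
</body></html>
```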
The output is basically identical to the previous examples you've seen in other solutions, so let's move on and expand on the original, basic example.

Sorting the news items that you output is one of the most straightforward first steps. With a traditional solution, sorting might be difficult, if not impossible in some cases. XQuery, however, includes support for a number of different data types, which means that you can sort on a variety of data within the source XML file. With news feeds, you have much potential to sort the items on different pieces of information. The typical model is to sort the items by date, so that you can read the entries in chronological order. To add sorting to the output, you just need to add a line to the for, let, where, order by, return (FLWOR) expression within the rss-summary function to order the output, as in Listing 7.

Listing 7. Adding sorting to the output

XQuery understands dates as they are written within RSS and Atom files, so it can automatically sort the information for you. If you want to order the items by descending date (with the newest item first), just add the descending parameter to your order by expression (Listing 8).

Listing 8. Adding the descending parameter

In both of the above examples that sort on a date, you used an XPath expression to refer to the individual item (in this case, an item within an RSS feed), selecting the content of an individual tag as the sort value.

Your basic system is now in place to output a single RSS feed as HTML by using XQuery. Next you need to handle multiple feeds. Within the original script, the system decides which feed to process through the specification within the call to the rss-summary function: {local:rss-summary("planet.rss2.xml")}. To add more feeds, you can call the function multiple times. The planet.mcslp.com site is actually an aggregation of a number of different feeds into a single blog and feed for easier display.
You can duplicate this process using XQuery to merge the feeds together. When you merge the feeds, you probably also want to add a title to each post so that you can see the source of each one. Listing 9 shows a modified feed output that contains the information from two feeds.

Listing 9. Displaying multiple RSS feeds (multirss.xql)

Figure 1 shows the output of this process as the final rendered HTML.

Figure 1. Multiple RSS feeds

The problem with calling the rss-summary function multiple times with two documents sequentially is that the information isn't merged. Instead, you output the information from the two feeds one after the other. To truly merge multiple feeds, the easiest method is to create an intermediary XML document that you can then parse again using XQuery to filter out the individual information. You can see an example of this in Listing 10.

Listing 10. Merging multiple feeds with an intermediary document (multirss2.xql)

The example in Listing 10 works in a more complicated way than the previous examples, but it is nonetheless quite straightforward. The example is split into four components, three functions and the main execution block, each of which has a different role to play:

- The buildmergerow() function accepts the feed title and an individual item and creates an intermediary XML structure for each item that contains the feed title, item title, link and publication date information.
- The rss-summary() function works almost as before, processing an individual feed, but calls buildmergerow() on each item.
- The rss-row() function formats an item in the quasi-RSS XML format into an HTML list item.
- The main block provides a list of feeds. You work through the list of feeds, processing each one, and place the returned output into the $merged variable. Because you assign the output of the entire for loop to the variable, the effect is that you place a list of the quasi-RSS item XML into the variable for all feeds.
Once the processing has finished, the value of $merged contains all of the items from all of the RSS feeds in an XML format. The last for loop in that section then iterates over that quasi-RSS list, sorts the items, and uses the rss-row() function to format the information. Because you have merged all of the items from all of the feeds into the single $merged list, you can sort all of the items using the same parameter (in this case, the date) and produce a properly merged list in reverse chronological order. You can see the result of the process in Figure 2.

Figure 2. Merged RSS feeds

The previous example of merging more than one RSS feed actually provides you with the solution for how to deal with different feed types. You can use the same intermediary-processing trick to parse RSS and Atom feeds into the intermediary XML format and then process that intermediary XML document to produce the information you want.

In this instance, you have a few hurdles to overcome. The first issue is that Atom uses namespaces within the source XML document, so you must declare the Atom namespace to extract the information correctly. The second issue is to identify the type of document that you want to access. Although it is often clear from the name of the feed or document, you can use an if statement within XQuery to look for specific tags and then execute the appropriate parsing function to extract the information from the file. You can see an example of the statement in the fragment in Listing 11.

Listing 11. An if statement to identify the feed type information

Listing 12 shows the full listing. This is an adaptation of the previous solution. Instead of a single function to build the intermediary document, you now have two functions, one for Atom feeds and one for RSS feeds.
Like the previous solution, you now have separate functions to process the feeds (because the XPath specification for each is different), and corresponding functions to build the intermediary XML document.

Listing 12. Merging different feed types (multifeed.xql)

Here the script uses local copies of the files to save some time. Let's use a different XQuery processor to parse the content, one that doesn't include a built-in URL accessor method as the Qexo toolkit does. For example, using the Saxon XQuery processor, you can run the script like this:

$ java -cp /usr/share/saxon/lib/saxon8.jar net.sf.saxon.Query multifeed.xql

Figure 3 shows the output from the feed. It should be identical to the output of Figure 2. The difference is not in what you generated, but in that you used Atom and RSS feeds together to generate the information.

Figure 3. A merged RSS and Atom summary

In this article, you looked at how XQuery offers a flexible method to process XML files. Some find this method easier to follow syntactically. Certainly some XQuery abilities, such as the flexibility to create a single intermediary XML document that you can reparse to handle different sources and input formats, help solve issues commonly experienced when you process XML files.

Resources

Learn

- The RSS 1.0 specification: Read about Atom, an XML-based Web content and metadata syndication format.
- RSS 2.0 Specification: Read more on this Web content syndication format and dialect of XML. All RSS files must conform to the XML 1.0 specification, as published on the World Wide Web Consortium (W3C) website.
- RSS 2.0 and Atom: Compare the differences between the RSS 2.0 and Atom 1.0 syndication languages.
- Introduction to Syndication, Really Simple Syndication (RSS) (Vincent Lauria, developerWorks, March 2006): Learn about RSS, Atom and feed readers, including why RSS is so popular, what its benefits are, what feed readers are available and which one might fit your needs.
- RSS (file format): Read Wikipedia's article detailing the history and differences of the RSS file formats.
- XQuery 1.0 specification: Learn more about this XML language that makes intelligent use of XML structure to express queries across various XML data sources.
- The future of the Web is Semantic (Naveen Balani, developerWorks, October 2005): Explore the basics of Semantic Web technologies and how you can leverage ontology-based development.
- XSLT: Working with XML and HTML (Khun Yee Fung, Addison-Wesley, December 2000): Try a comprehensive reference and tutorial for XSLT.
- Tutorial: Process XML using XQuery (Nicholas Chase, developerWorks, March 2007): Learn more about XQuery 1.0 and how to retrieve information from an XML document stored in an XQuery-enabled database.
- XSLT Functions: Check out the extensive reference from w3schools.com.
- XML zone: Learn all about XML.
- The technology bookstore: Browse for books on these and other technical topics.

Get products and technologies

- The SAXON XSLT and XQuery Processor: Get an open source processor to handle XQuery document processing.
- The Qexo tool: Try this XQuery implementation that is part of GNU Kawa.
- IBM trial software: Build your next development project with trial software available for download directly from developerWorks.

Discuss

- Participate in the discussion forum.
- XML zone discussion forums: Participate in any of several XML-related discussions, including the Atom and RSS forum.
- developerWorks XML zone: After you read this article, post your comments and thoughts in this forum. The XML zone editors moderate the forum and welcome your input.
- developerWorks blogs: Check out these blogs and get involved in the developerWorks community.
http://www.ibm.com/developerworks/library/x-xqueryrss/
# How a PVS-Studio developer defended a bug in a checked project

The PVS-Studio developers often check open-source projects and write articles about that. Sometimes, when writing an article, we come across interesting situations or epic errors. Of course, we want to write a small note about it. This is one of those cases.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/2e1/a89/b3b/2e1a89b3b31c18284ee45219df8e00da.png)

### Introduction

At the moment I'm writing an article about checking the [DuckStation](https://github.com/stenzek/duckstation/tree/13c5ee8bfb4f0f8fc40f76b39de58b5d9b473dc3) project. This is an emulator of the Sony PlayStation console. The project is quite interesting and actively developing. I found some interesting bugs and want to share the story of one of them with you. This article demonstrates:

* that even experts can make mistakes;
* that static analysis may save a person from making such mistakes.

### Example of an error

**PVS-Studio issued a warning**: [V726](https://pvs-studio.com/en/w/v726/) An attempt to free memory containing the 'wbuf' array by using the 'free' function. This is incorrect as 'wbuf' was created on stack. log.cpp 216

```
template <typename T>
static ALWAYS_INLINE void FormatLogMessageAndPrintW(....)
{
  ....
  wchar_t wbuf[512];
  wchar_t* wmessage_buf = wbuf;
  ....
  if (wmessage_buf != wbuf) {
    std::free(wbuf);          // <=
  }
  if (message_buf != buf) {
    std::free(message_buf);
  }
  ....
}
```

In the original version of the article, I described this bug the following way:

> Here the analyzer detected code with an error. In this code fragment, we see an attempt to delete an array allocated on the stack. Since the memory has not been allocated on the heap, you don't need to call any special functions like *std::free* to clear it. When the object is destroyed, the memory is cleared automatically.

This may seem like a great error for an article: a static buffer and a dynamic memory release. What could have gone wrong? I'll tell you now.
In our company, a developer writes an article and gives it to a more experienced teammate, who reviews the article and gives recommendations on how to improve it. This case is no exception. Look at the comment the reviewer left after he read my article:

> There's no error here. This is a false alarm; the analyzer has not coped with this code. In the middle, there's a dynamic memory allocation for the message by the *malloc* function. The *if (wmessage_buf != wbuf)* check is used to determine whether to call *std::free* or not.

You're probably wondering what *malloc* is and where it came from. My bad. It's time to fix that. Take a look at the function's entire code. Above, I have already shown you a fragment of this code when describing the error. The reviewer inspected the same fragment when reading the article.

```
template <typename T>
static ALWAYS_INLINE void FormatLogMessageAndPrintW(
  const char* channelName, const char* functionName, LOGLEVEL level,
  const char* message, bool timestamp, bool ansi_color_code,
  bool newline, const T& callback)
{
  char buf[512];
  char* message_buf = buf;
  int message_len;
  if ((message_len = FormatLogMessageForDisplay(
         message_buf, sizeof(buf), channelName, functionName, level,
         message, timestamp, ansi_color_code, newline)) > (sizeof(buf) - 1))
  {
    message_buf = static_cast<char*>(std::malloc(message_len + 1));
    message_len = FormatLogMessageForDisplay(
      message_buf, message_len + 1, channelName, functionName, level,
      message, timestamp, ansi_color_code, newline);
  }

  if (message_len <= 0)
    return;

  // Convert to UTF-16 first so unicode characters display correctly.
  // NT is going to do it anyway...
  wchar_t wbuf[512];
  wchar_t* wmessage_buf = wbuf;
  int wmessage_buflen = countof(wbuf) - 1;
  if (message_len >= countof(wbuf))
  {
    wmessage_buflen = message_len;
    wmessage_buf = static_cast<wchar_t*>(
      std::malloc((wmessage_buflen + 1) * sizeof(wchar_t)));
  }

  wmessage_buflen = MultiByteToWideChar(CP_UTF8, 0, message_buf, message_len,
                                        wmessage_buf, wmessage_buflen);
  if (wmessage_buflen <= 0)
    return;

  wmessage_buf[wmessage_buflen] = '\0';
  callback(wmessage_buf, wmessage_buflen);

  if (wmessage_buf != wbuf) {
    std::free(wbuf);          // <=
  }
  if (message_buf != buf) {
    std::free(message_buf);
  }
}
```

Indeed, if the message length is greater than or equal to *countof(wbuf)*, a new buffer is created for it on the heap. You may think that this fragment looks a lot like a false alarm. However, I looked at the function's code for a minute and responded the following way:

> Strongly disagree. Let's look at the code: [the buffer on the stack](https://github.com/stenzek/duckstation/blob/13c5ee8bfb4f0f8fc40f76b39de58b5d9b473dc3/src/common/log.cpp), [dynamic allocation of the new buffer on the heap](https://github.com/stenzek/duckstation/blob/13c5ee8bfb4f0f8fc40f76b39de58b5d9b473dc3/src/common/log.cpp), [releasing the wrong buffer](https://github.com/stenzek/duckstation/blob/13c5ee8bfb4f0f8fc40f76b39de58b5d9b473dc3/src/common/log.cpp). If the string doesn't fit into the local buffer on the stack, then we put it in a dynamically allocated buffer at the *wmessage_buf* pointer. As you can see from the code, below there are 2 branches that release memory if there was a dynamic allocation. We can check this with *wmessage_buf != wbuf*. **However, in the first branch the wrong memory is released. That's why the warning is here.** In the second branch the right buffer [is released](https://github.com/stenzek/duckstation/blob/13c5ee8bfb4f0f8fc40f76b39de58b5d9b473dc3/src/common/log.cpp). No warnings here.

Indeed, there's an error.
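To show the pattern in isolation, here is a minimal, self-contained sketch of the *corrected* cleanup logic. This is a hypothetical helper, not the actual DuckStation code: try a stack buffer first, fall back to a heap buffer for long input, and free the heap pointer rather than the stack array.

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <string>

// Hypothetical helper illustrating the stack-buffer-with-heap-fallback
// pattern with the correct pointer freed at the end.
std::string copy_message(const char* message)
{
  char buf[16];
  char* message_buf = buf;
  const size_t len = std::strlen(message);
  if (len >= sizeof(buf))
    message_buf = static_cast<char*>(std::malloc(len + 1));

  std::memcpy(message_buf, message, len + 1);
  std::string result(message_buf);

  if (message_buf != buf)
    std::free(message_buf); // free the heap copy, never 'buf' itself
  return result;
}
```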
The developer should have freed *wmessage_buf* the same way as it is done in the branch below. My teammate's response was short:

> Agree. I was wrong.
>
> P.S. I owe you a beer.

### Conclusion

Unfortunately, every static analyzer issues [false positives](https://pvs-studio.com/en/blog/terms/6461/). Because of this, developers question some warnings and dismiss them as false positives. My advice: don't rush, and be attentive when you inspect warnings.

By the way, you can read similar entertaining articles. For example:

1. [How PVS-Studio proved to be more attentive than three and a half programmers](https://pvs-studio.com/en/blog/posts/cpp/0587/).
2. [One day in the life of PVS-Studio developer, or how I debugged diagnostic that surpassed three programmers](https://pvs-studio.com/en/blog/posts/cpp/0842/).
3. [False positives in PVS-Studio: how deep the rabbit hole goes](https://pvs-studio.com/en/blog/posts/cpp/0612/).

Enjoy your reading. Come and [try PVS-Studio](https://pvs-studio.com/trial_license_en) on your projects.
https://habr.com/ru/post/586700/
I got the outdated neato_robot ROS package mostly working just by adding a timeout to its serial communications. But this only masked the symptom of an unknown problem, with no understanding of why it failed. To understand what happened, I removed the timeout and added the standard Python debugging library to see where it had hung.

import pdb; pdb.set_trace()

I found the hang was in getMotors() in neato_driver.py. It waits for my Neato to return all the motor parameters specified in the list xv11_motor_info. This list appears to reflect data returned by the author's Neato robot vacuum, but my Neato returns a much shorter list with only a partial overlap. Hence getMotors() waits forever for data that will never come. This is a downside of writing ROS code without full information from the hardware maker: we could write code that works on our own Neato, but we would have no idea how responses differ across different robot vacuums, or how to write code to accommodate those variations.

Turning attention back to this code, self.state[] is supposed to be filled with responses for the kinds of data listed in xv11_motor_info. Once I added a timeout, though, getMotors() breaks out of its for loop with incomplete data in self.state[]. How would this missing information manifest? What behavior does it change for the robot? Answer: it doesn't affect behavior at all. At the end of getMotors() we see that it only really cared about two parameters: LeftWheel_PositionInMM and RightWheel_PositionInMM. The remaining parameters are actually ignored. Happily, the partial overlap between the author's Neato and my Neato does include these two critical parameters, and that's why I was able to obtain /odom data running on my Neato after adding a timeout. (Side note: I have only looked to see that there is data – I have not yet checked whether the /odom data reflects actual robot odometry.)

Next I need to see if there are other similar problems in this code.
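For reference, the timeout-guarded read described above can be sketched in plain Python. The names and the line format are illustrative, not the actual neato_driver.py code: read "Name,Value" lines until the parameters we actually need have arrived, but give up at a deadline so a Neato that returns a shorter list cannot hang the loop forever.

```python
import time

# The only two parameters the caller actually needs (illustrative names
# matching the ones discussed above).
NEEDED = ("LeftWheel_PositionInMM", "RightWheel_PositionInMM")

def get_motors(read_line, timeout=1.0):
    """Collect 'Name,Value' lines into a dict until NEEDED are present
    or the deadline passes; returns whatever partial data arrived."""
    state = {}
    deadline = time.monotonic() + timeout
    while not all(key in state for key in NEEDED):
        if time.monotonic() > deadline:
            break  # partial data; the wheel params may still have arrived
        line = read_line()
        name, sep, value = line.partition(",")
        if sep:
            state[name.strip()] = int(value)
    return state
```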
I changed the xv11_motor_info list of parameters to match those returned by my Neato. Now getMotors() will work as originally intended and cycle through all the parameters returned by my Neato (even though it only needs two of them). If this change to the neato_robot package still hangs without a timeout, I'll know there are similar problems hiding elsewhere in the package. If my modification allows it to run without a timeout, I'll know there aren't any others I need to go hunt for.

Experiment result: success! There are no other hangs requiring a timeout to break out of their loop. This was encouraging, so I removed import pdb. Unfortunately, that removal caused the code to hang again. Unlike the now-understood problem, adding a timeout does not restore functionality. Removing the debugger package isn't supposed to affect behavior; when it does, it usually implies a threading or related timing issue in the code. This one will be annoying, as the hang only manifests without Python's debugging library, which means I'll have to track it down without debugger support.
https://newscrewdriver.com/2019/03/29/neato-robot-ros-package-expects-specific-response-but-responses-actually-differ-between-neato-units/
Introduction:

This one was just for fun; the article describes a project used to build a simple piano keyboard that plays some not too terrific sounding notes, courtesy of Kernel32.dll's Beep function.

Getting Started:

In order to get started, unzip the attachment and load the solution into Visual Studio 2005. Examine the solution explorer and note that the project contains one class:

Figure 2: The Solution Explorer Showing the Project Files

The small keyboard project's single class is a windows form; that form contains a collection of buttons used to simulate the appearance (if not the sound) of a piano keyboard.

The Code:

To gain access to the Beep function, the form imports the InteropServices namespace and declares the function with the DllImport attribute:

using System.Runtime.InteropServices;

public class frmKeyboard : Form
{
    [DllImport("KERNEL32.DLL")]
    public static extern void Beep(int freq, int dur);

Now onto the next bit of magic; the button handlers. This bit of code makes it possible to produce beautiful strains of high quality music through the keyboard:

private void Play_KeyDown(object sender, System.Windows.Forms.KeyEventArgs e)
{
    this.Focus();
    switch (e.KeyData.ToString())
    {
        case "A":
            this.btnMC_Click(sender, e);
            break;
        case "S":
            this.btnMD_Click(sender, e);
            break;
        case "D":
            this.btnME_Click(sender, e);
            break;
        case "F":
            this.btnMF_Click(sender, e);
            break;
        case "G":
            this.btnMG_Click(sender, e);
            break;
        case "H":
            this.btnMA_Click(sender, e);
            break;
        case "J":
            this.btnHC_Click(sender, e);
            break;
        case "K":
            this.btnHD_Click(sender, e);
            break;
        case "L":
            this.btnHE_Click(sender, e);
            break;
        case "Z":
            this.btnHF_Click(sender, e);
            break;
        case "X":
            this.btnHG_Click(sender, e);
            break;
        case "C":
            this.btnHA_Click(sender, e);
            break;
    }
}

Taking a look at this code, you will note that it handles all of the key down events associated with each of the buttons. A switch statement is used to figure out which keyboard key was pressed, and the corresponding button's click event handler is then invoked:

private void btnMC_Click(System.Object sender, System.EventArgs e)
{
    // middle C
    Beep(261, 150);
}

As advertised, this event handler invokes the Beep function and passes the frequency and duration arguments to that function.
I set the duration of all the keys to be 150 milliseconds, but you can use any value of your choosing.

Summary

The project demonstrates a few useful things, like using the DllImport attribute supported by the InteropServices library, but overall, it was just for fun.

NOTE: THIS ARTICLE IS CONVERTED FROM VB.NET TO C# USING A CONVERSION TOOL. ORIGINAL ARTICLE CAN BE FOUND ON VB.NET Heaven ().
http://www.c-sharpcorner.com/UploadFile/scottlysle/SmallPianoKeyboard02022007001734AM/SmallPianoKeyboard.aspx
I'm trying to store arbitrary D object information in an XML file, then load and create the objects it represents programmatically. For example, I have the file test.xml, which looks like:

<!-- test.xml -->
<scene>
  <main.Person>
    <!-- nothing yet -->
  </main.Person>
</scene>

and corresponds to the D file:

// main.d
import std.conv, std.stdio;
import std.file, std.xml;

class Person
{
    string name;
    uint age;
}

void main()
{
    // 1. read XML file, create Document
    // 2. create objects named in Document
    // 3. set objects' data based on attributes
}

Currently I'm accomplishing this by using Object.factory() and casting the result to Person to see if it's a Person; then I run each XML attribute through a switch statement which sets the Person fields accordingly. I can generate all that, which makes it a bit more dynamic, but is that really the best way to go about this? It seems too rigid for my tastes (it can't create structs) and, when lots of objects are potentially represented, slow as well. I was thinking of using generated D files instead of XML, which was an interesting concept except a bit limiting in other ways. Does anyone have any thoughts on any of this? I'd love to know them if you do :)

Here's one way you could do it: You have to add the various classes to the thing for it to build the reflection info; that's the beginning of main() (bottom of file). I used the delegate map for the members up top, which is a bit weird-looking code but is able to set anything at runtime, which it uses to build the map. I used my dom.d as the XML lib because I don't know std.xml, but it shouldn't be too hard to change over since it doesn't do anything fancy. If you want my library though, it is in here: dom.d and characterencodings.d
http://forum.dlang.org/thread/zzztdlutrwkfbwsdcntd@forum.dlang.org
Convenient CacheFly CDN management for Python

The CacheFly CDN exposes an HTTP-based API for forcefully purging content from their reverse-proxying CDN solution. The cachefly module provides a simple interface for performing those actions. Furthermore, the django_cachefly module provides a convenient way to configure and access an application-wide API client instance through your Django settings.

Installation

To install cachefly and django_cachefly, do yourself a favor and don't use anything other than pip:

$ pip install cachefly

Installation in Django

After the module has been installed, you need to add django_cachefly to your list of INSTALLED_APPS in your application configuration:

INSTALLED_APPS = (
    ...
    'django_cachefly',
)

You also need to configure your CacheFly API key in your application's settings file:

CACHEFLY_API_KEY = '..'

The CacheFly API client can now be easily accessed from the entire application:

from django_cachefly import client

...

Testing

Testing requires a set of valid credentials. All tests are performed against URLs in the /_testing path for the CDN node you select. Credentials are loaded from the environment during testing for security:

- CACHEFLY_API_KEY - API key to use for testing.
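In a plain (non-Django) script, the same environment-based credential loading that the test suite uses can be sketched like this; the helper function is illustrative and not part of the cachefly API:

```python
import os

def load_api_key(environ=os.environ):
    """Fetch the CacheFly API key from the environment, failing fast
    when it is missing (mirrors how the test suite loads credentials)."""
    key = environ.get("CACHEFLY_API_KEY")
    if not key:
        raise RuntimeError("CACHEFLY_API_KEY is not set")
    return key
```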
https://pypi.org/project/cachefly/
JavaScript took over web development and became the de-facto standard language of the web. Technologies like Flash or Silverlight succumbed to the unstoppable rise of JavaScript. Since then, JavaScript has expanded into even further areas like mobile applications, servers and desktop applications. This doesn't mean JavaScript will stay unchallenged forever.

First announced in 2015, WebAssembly (aka WASM) is a new portable binary format designed to be efficient in size, as well as in parsing and execution time. It finally got its Minimum Viable Product (MVP) released and supported by all major modern browsers during 2017, with older browsers relying on polyfills.

Since the WebAssembly standards are being developed by a W3C group with engineers from Google, Mozilla, Microsoft and Apple, it should come as no surprise that Microsoft saw it as an opportunity to explore running .NET in the browser. This is where Blazor comes into play, the current Microsoft experiment that allows .NET Core to be run in the browser. In the rest of the article, we will explore how Blazor brings these technologies together and the new possibilities it brings to the table.

WebAssembly is a portable binary format that has been designed to be:

- compact and efficient in size
- fast to parse and execute
- safe, running in the same sandboxed environment as JavaScript

It is important to highlight that WebAssembly does not attempt to replace JavaScript. It has been designed to complement and integrate with JavaScript, as well as with other existing web technologies like HTML and CSS. Higher-level languages can be compiled to WebAssembly, which is then run by the browser in the same sandboxed environment as JavaScript code. Let's briefly see how WebAssembly execution compares to JavaScript, as the efficiency and performance gains are among its most promising features.

Figure 1, executing JavaScript code and WebAssembly modules in the browser

This is admittedly a very brief overview of WebAssembly since we still have lots more ground to cover in the article.
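To make the execution model in Figure 1 concrete, here is a minimal example of loading and calling a WebAssembly module from JavaScript. The module bytes are hand-assembled purely for illustration and simply export an add function:

```javascript
// Minimal hand-assembled WebAssembly module exporting add(a, b) -> a + b.
// The byte layout follows the WebAssembly binary format: magic header,
// then type, function, export and code sections.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0/1, i32.add
]);

const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

In a browser you would normally fetch the .wasm file and use WebAssembly.instantiateStreaming instead of constructing the module synchronously from a byte array.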
For a more detailed discussion, apart from the official website, you can check this article from Dave Glick, this series of articles from Lin Clark and this article from William Martin. In a nutshell, WebAssembly promises a more efficient, faster way of running code in the browser, using any higher-level language that can target it for development, while being compatible with the existing web technologies.

As you can imagine, it is still early days for WebAssembly. The documentation and tooling available are pretty rough, including areas as crucial as browser and development tools. In terms of the features supported today, one of its main current limitations is the lack of support for direct access to the DOM and browser APIs, for which JavaScript interop is the only approach. It is early days, and proposals to extend the current MVP are being studied.

Once a new bytecode format for running code in the browser is available, one simple way to run .NET code in the browser would be using a runtime compiled to WebAssembly. This is where Mono fits in the puzzle. Mono is an open source implementation of the .NET Framework that provides .NET Standard 2.0 support and has recently been compiled to WebAssembly.

The way .NET in the browser currently works is by compiling the Mono runtime to WebAssembly, which then uses Mono's own IL interpreter to load and execute .NET assemblies. A JavaScript file bootstraps this runtime and downloads the dlls, which are then provided to the runtime. This JavaScript file also provides access to any browser APIs required. Overall this means your code gets compiled to .NET Standard dlls which are downloaded by the browser and interpreted by Mono's runtime compiled to WebAssembly, which is the only part actually compiled to WebAssembly.
This process is known as Interpreted Mode and it can be roughly depicted by the following diagram: Figure 2, The WebAssembly Mono runtime in interpreted mode As you can imagine, the only bit optimized for WebAssembly is the runtime, which then needs to use its own IL interpreter to run the actual application assemblies. This is not optimal, losing some of the benefits WebAssembly provides in terms of efficiency and performance. However, the Mono team has been exploring an alternative Ahead of Time (AoT) mode where all of your .NET application code would get compiled to WebAssembly and executed with no previous interpretation of .NET assemblies needed. You can find an interesting discussion of both modes in Steve Sanderson's introduction to Blazor. The last piece of the puzzle is Blazor (Browser + Razor), which is a framework for building Single Page Applications (SPA) in .NET Core which uses the Mono WebAssembly runtime. Blazor aims to provide typical SPA features like components, routing and data binding while leveraging the .NET Framework and its tooling. It provides a familiar development experience for .NET developers who can now use the same language across the entire stack. At the same time, it is a true client framework which imposes no restrictions on server-side technologies and can be deployed from any server capable of serving static files. The framework is based around Razor pages which are used to create Pages and Components. These support dynamic HTML, one-way and two-way data binding and DOM event handling. It also provides a JavaScript interop service that can be used to either call JavaScript functions or receive a call from them. The following diagram shows how these pieces fit together in the context of a Blazor project: Figure 3, Blazor, Mono and WebAssembly It is still very early days for the Blazor project, which is officially considered an experiment by Microsoft.
Whether it will become a supported product with enough developer adoption remains to be seen. However, it is certainly promising and while the setup seems complicated, the development experience is surprisingly smooth where all the pieces needed for running .NET code in the browser work out of the box. It also presents a different philosophy from past failed attempts like Silverlight, in the sense that it doesn't require any specific plugin to be installed in the browser. It is fully based on technologies provided by (modern) browsers. Older browsers like IE can be supported using WebAssembly polyfills, although they are not a priority for the team and support is partial since Blazor itself has dependencies like Promise which are not polyfilled yet by the project templates. If you want to read more, the official docs should be the first stop to read about Blazor. Then make sure you check out the Awesome Blazor site which contains a curated list of videos, articles, examples and much more. In order to get started with Blazor you need to install a few prerequisites: a recent .NET Core SDK, an up-to-date Visual Studio 2017 with the ASP.NET and web development workload, and the Blazor Language Services extension. Pay close attention to the versions of Visual Studio and the .NET Core SDK since earlier versions won't work! You can find the version of the .NET Core SDK by running dotnet --version in the console (mine is 2.1.302 at the time of writing). The version of Visual Studio can be found in the Help > About Microsoft Visual Studio window. Everything related to Blazor is still in flux, so I would also recommend you check out the official docs in case things changed since I wrote the article. You will also need a modern browser like Chrome, Firefox, Safari or Edge. While support for older browsers is possible through polyfills, browsers like IE11 are not a priority for the team (and might never be) and don't currently work. You can see some community-driven polyfills to address the issue but there is no guarantee these will keep working with newer versions.
Now that you have everything in place, open Visual Studio and create a new project selecting ASP.NET Core Web Application. A new window will open where you should see three different Blazor projects: Figure 4, Blazor project templates We will discuss these in more detail later, select the Blazor one and create the new project. Once the project is generated, you should see the following folder structure: Figure 5, new Blazor project structure You can now go ahead and run the project in the Debug mode. A browser window should appear, display a Loading message for about a second and then your first Blazor app should appear: Figure 6, your first Blazor application Let’s see if we can understand how Blazor is using Mono and WebAssembly to run this project. Open the index.html file of your project and you will see it is surprisingly simple. The header loads the styles while the body contains an <app> element where the application will later be mounted, plus a script reference for blazor.webassembly.js which is the one starting the bootstrap process of the app in the browser. <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width"> <title>WebApplication10</title> <base href="/" /> <link href="css/bootstrap/bootstrap.min.css" rel="stylesheet" /> <link href="css/site.css" rel="stylesheet" /> </head> <body> <app>Loading...</app> <script src="_framework/blazor.webassembly.js"></script> </body> </html> The script blazor.webassembly.js is quite important as it is the one in charge of starting the application, including downloading and starting the Mono WebAssembly runtime. 
If you open the network tab of your browser, you will see the following sequence: the blazor.webassembly.js script requests blazor.boot.json, then downloads the Mono runtime (mono.js and mono.wasm) and finally the application and framework assemblies. The information contained in blazor.boot.json is generated at compile time and basically describes the assemblies needed by your application, your main assembly and the entrypoint function: Figure 7, Bootstrap process of the application Once everything is downloaded, the Mono runtime is started with the given assemblies and the entrypoint function is invoked. This whole process comes at a cost and it currently takes about 1.4 seconds before you see the application rendered. If you profile the app, you will notice that about 300ms from the start the mono.wasm is downloaded (which is about 700Kb gzipped). It then takes another 700ms for it to be processed before the download process of all the required assemblies begins, followed by the Mono runtime starting the application. In about 1.4 seconds, you will see the first paint of the application: Figure 8, Monitoring application start We also need to reinforce the idea that currently the only WebAssembly module is the Mono runtime. Remember this is the current mode of how things work, but an alternate Ahead of Time mode is being explored where the assemblies would also be compiled to WebAssembly and promises much better performance. There is another performance problem when it comes to updating the DOM so Blazor can render your app and any updates that come after: since WebAssembly cannot access the DOM directly, Blazor has to rely on JavaScript code to apply every render to the page. This process is better explained in the Learn Blazor website, but together with the issue described above, remains one of the current major performance bottlenecks as you end up relying on JavaScript code for any rendering. There are proposals for WebAssembly to gain better access to the DOM but it will take some time before they materialize. In this section, we will take a brief look at some of the functionality you can find today in Blazor, much of which will appear familiar if you ever worked with Razor in ASP.NET.
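To make the bootstrap description above more concrete, a blazor.boot.json file looks roughly like the following at the time of writing; the exact shape changes between Blazor releases and the project name shown here is illustrative:

```json
{
  "main": "WebApplication10.dll",
  "entryPoint": "WebApplication10.Program::Main",
  "assemblyReferences": [
    "Microsoft.AspNetCore.Blazor.dll",
    "Microsoft.AspNetCore.Blazor.Browser.dll",
    "mscorlib.dll",
    "System.Core.dll"
  ],
  "cssReferences": [],
  "jsReferences": []
}
```

The JavaScript bootstrapper reads this file to know which dlls to download and which method to hand to the Mono runtime as the entry point.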
You can read through the official docs for a more detailed (and possibly more up to date given the experimental and fast paced nature of Blazor) overview of the features. The Learn Blazor website is also worth a visit, as well as the Awesome Blazor curated listing. Blazor leverages the Razor templating engine extensively, which means if you are already familiar with it from writing ASP.NET applications, you will have an easier time writing Blazor pages and components. Let's start by taking a look at the simplest of the examples, the Index.cshtml page located inside the Pages folder: @page "/" <h1>Hello, world!</h1> Welcome to your new app. <SurveyPrompt Title="How is Blazor working for you?" /> The Razor syntax should be familiar, with its mixture of HTML, directives like @page and custom components like SurveyPrompt. So, what does the SurveyPrompt component look like? If you open the SurveyPrompt.cshtml file you will see: <div class="alert alert-secondary mt-4" role="alert"> <span class="oi oi-pencil mr-2" aria-hidden="true"></span> <strong>@Title</strong> … </div> @functions { [Parameter] string Title { get; set; } } It is similar to the Index page except that it contains the simplest of examples on how the pages/components can include logic written in C#. One of the main differences when working with Blazor is that you are now writing a client-side SPA application, which runs in the browser and should react and re-render its components whenever the user interacts with it. This is different from server-side Razor, which renders an HTML page for a given request and is done as soon as the HTML page is ready and sent to the browser. What this means is that your pages and components are now reactive. This is a concept familiar to anyone working with client-side frameworks like Angular, React or Vue and can be easily illustrated with an example.
Let’s update the Index page so it includes an input for the user to enter his/her name, which will then be used to update the Title property provided to the SurveyPrompt component: @page "/" <h1>Hello, world!</h1> <p>Welcome to your new app @Name.</p> <input bind="@Name" type="text" class="form-control" placeholder="Name" /> <SurveyPrompt Title="@Title" /> @functions{ public String Name { get; set; } public String Title => $"How is Blazor working for you, {Name}?"; } When you run the updated project (you will need to restart it since it does not automatically do so when the code changes), you should see the following updated Index page. Notice how the name you enter is used both for the welcome message in the Index page and the Title of the SurveyPrompt component: Figure 9, reactivity and data binding If you update the value of the input, both the Index page and the SurveyPrompt will be re-rendered, updating the DOM to reflect the changes. And not only that, it will update only the DOM elements that have been affected, since it knows that only the <p> of the Index page and the <strong> element of the SurveyPrompt component were affected by the change to the Name value. As you can see, the reactivity means your components are instantiated and alive while they are part of the current page, and they will react to DOM events and data changes accordingly. You can tap into this lifecycle of a component by implementing any of the lifecycle methods. 
These will be called by Blazor at specific points in the life of a component. For example, let's update the SurveyPrompt component so it renders a list with an entry for each event: <div class="alert alert-secondary mt-4" role="alert"> … </div> <ul> @foreach (var evt in LifecycleEvents) { <li>@evt</li> } </ul> @functions { [Parameter] string Title { get; set; } List<string> LifecycleEvents { get; set; } = new List<string>(); protected override void OnInit() => LifecycleEvents.Add("OnInit"); protected override void OnAfterRender() => LifecycleEvents.Add("OnAfterRender"); } Now run the application and change the Name using the input. Notice how the SurveyPrompt component was initialized once but rendered every time the Name changed: Figure 10, Some of the lifecycle events of a component For a more detailed discussion of the lifecycle methods available see the official docs. We have already seen an example of how to bind a C# property to an attribute of a DOM element, however it might have gone unnoticed. When we added the input to the Index page using the bind attribute we were in fact adding two-way data binding between our Name property and the value property of the input DOM element. Under the hood, the bind attribute sets one-way binding between the C# property and the DOM property, plus an event handler for the DOM event so the C# property gets updated too. Let's start by seeing how you can bind to an event of any DOM element using the on{eventName} attribute. All you have to do is set the attribute to a method with the right parameters (or none if you don't need to use them). Replace your previous changes to the Index.cshtml with: <input value="@Name" onchange="@OnChange" oninput="@OnInput" onfocus="@OnFocus" onblur="@OnBlur" onclick="@(() => DOMEvents.Add("on click.
Inline lambdas are possible too!"))"/> <ul> @foreach (var evt in DOMEvents) { <li>@evt</li> } </ul> @functions{ public String Name { get; set; } List<string> DOMEvents { get; set; } = new List<string>(); void OnChange(UIChangeEventArgs e) => DOMEvents.Add($"on change: {e.Value}"); // Current Blazor issue prevents the updated input value // to be sent in the input event: void OnInput(UIEventArgs e) => DOMEvents.Add($"on input"); void OnFocus() => DOMEvents.Add("on focus"); void OnBlur() => DOMEvents.Add("on blur"); } Figure 11, Data binding and event handling As the example shows, you can listen to any DOM event using the on{eventname} attribute and assigning to it a component function with the required signature. Feel free to study the Counter.cshtml page in the project template which shows another event handling example. Let's move onto data binding next. By now it should be clear that you can perform one-way data binding by simply setting any valid DOM property to a C# expression. For example: <button disabled="@IsDisabled">Click me</button> However, this only gives us one-way data binding from C# to the DOM. If the value of the C# property changes, the property of the DOM element will be updated but not the other way around. This might be fine for setting DOM properties like disabled, readonly or class, but in many cases and particularly with inputs, you will need two-way data binding with DOM properties like value or checked. This is what the bind attribute achieves, by automatically setting one-way binding from C# to DOM plus an event handler to update from DOM to C#.
So, the bind directive used here: <input bind="@Name" type="text" class="form-control" placeholder="Name" /> ...is the same as manually setting the one-way binding with the input's value property and listening to its change event to update: @using Microsoft.AspNetCore.Blazor; <input value="@Name" onchange="@((UIChangeEventArgs e) => Name = (string)e.Value)" /> In its more general form, you specify both the DOM property and event you want to use: <input bind-value-onchange="@Name" type="text" /> <input bind-checked-onchange="@MyValue" type="checkbox" /> But you can simply bind to a C# property and let the attribute figure out both the DOM property and event: <input bind="@Name" type="text" /> <input bind="@MyValue" type="checkbox" /> Before we move on, do you remember the JavaScript library blazor.webassembly.js? That library plays a crucial role in making event handling possible. The DOM events need to be handled by JavaScript code that is part of that library (since WebAssembly doesn't have direct access to the DOM) which will in turn call a method in the .NET code of Blazor that will finally dispatch the event to your component. You can see this in the Blazor source code of its JavaScript library which invokes a .NET method using the interop method DotNet.invokeMethodAsync (more on this later): return DotNet.invokeMethodAsync( 'Microsoft.AspNetCore.Blazor.Browser', 'DispatchEvent', eventDescriptor, JSON.stringify(eventArgs.data)); This means there is another performance penalty in handling events when compared to JavaScript frameworks which can directly interact with the DOM! So far, we have only seen components whose code is declared inline, with its code inside a @functions directive. This isn't the only option and you can extract the code for a component to its own class using the @inherits attribute. Let's update the Counter.cshtml page so we move its code to a separate file. Add a new class CounterBase into a CounterBase.cs file next to the page.
Update the new class so it inherits from BlazorComponent (you will need to use the namespace Microsoft.AspNetCore.Blazor.Components) and then move the code from the @functions directive into the new class. You will need to make members protected so they can be accessed from the page: public class CounterBase: BlazorComponent { protected int currentCount = 0; protected void IncrementCount() { currentCount++; } } Next remove the @functions directive from the page and include an @inherits CounterBase directive. The page should end up looking like: @page "/counter" @inherits CounterBase <h1>Counter</h1> <p>Current count: @currentCount</p> <button class="btn btn-primary" onclick="@IncrementCount">Click me</button> There are a couple of small caveats with this approach. The first one is that we need to use a different class name than the page, since the page will always be compiled into its own class which will inherit the one we create with the logic. The second is that since we rely on inheritance, we need to make members public/protected or they won't be accessible from the page. However, it is possible to avoid having a cshtml file and rely on the POCO class which will then contain regular C# code to render the template. This could be an interesting approach, particularly when the HTML of the template will be simple or highly dynamic. You can even mix and match this approach with parts of the template generated in the C# code. Read more about it in the Learn Blazor website. The logic of your components and pages will not always be so simple that it can be self-contained. Often you will need to use other classes that provide access to functionality your components need. A very typical example is making HTTP requests, for which you need to use the HttpClient class. Since Blazor is built on top of .NET Core, it comes with dependency injection support out-of-the-box.
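As a rough sketch of what such a pure-C# component could look like — HelloPoco is a made-up name and the exact API may differ between Blazor versions, so treat this as an assumption and check the Learn Blazor site — the BuildRenderTree method below is what the Razor compiler normally generates from a .cshtml template for you:

```csharp
using Microsoft.AspNetCore.Blazor.Components;
using Microsoft.AspNetCore.Blazor.RenderTree;

// A component defined entirely in C#, without a .cshtml file.
public class HelloPoco : BlazorComponent
{
    [Parameter]
    string Name { get; set; }

    // Roughly equivalent to a template like: <h1>Hello, @Name!</h1>
    protected override void BuildRenderTree(RenderTreeBuilder builder)
    {
        builder.OpenElement(0, "h1");
        builder.AddContent(1, $"Hello, {Name}!");
        builder.CloseElement();
    }
}
```

The integer arguments are sequence numbers that Blazor's diffing algorithm uses to match up elements between renders.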
If you are familiar with dependency injection in ASP.NET Core, the support here will feel very similar to its default container. You register services in the ConfigureServices method of the Startup class, which makes them available for injection into any Blazor component using the @inject directive. The default project template shows an example of this approach in the FetchData.cshtml page which gets an instance of the HttpClient injected so it can send an HTTP request. (This is a framework service which is always available so you don't need to register it) @page "/fetchdata" @inject HttpClient Http <h1>Weather forecast</h1> <p>This component demonstrates fetching data from the server.</p> … If you use the inheritance approach to move your logic into a class inheriting from BlazorComponent, you will need to use property injection (remember the class generated from the cshtml file actually inherits from it, so it won't know how to pass any constructor properties!): public class MyComponentBase: BlazorComponent { [Inject] protected HttpClient client { get; set; } … } Blazor being a client-side framework for SPA applications, it is no wonder developers would like to create libraries of reusable components. While the templates available from Visual Studio don't show this, there is a template for the dotnet new CLI that allows you to create component libraries which can then be referenced from other projects. Simply run dotnet new -i Microsoft.AspNetCore.Blazor.Templates::* in the console and you should now see a new template dotnet new blazorlib which allows you to create a new project with a Blazor library of components. There is a great introduction to creating component libraries in this article from Chris Sainty.
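Going back to service registration for a moment, here is a sketch of what registering your own service looks like; IQuoteService and QuoteService are made-up types for this example:

```csharp
// Startup.cs of the Blazor client project
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // One shared instance for the lifetime of the application
        services.AddSingleton<IQuoteService, QuoteService>();
    }

    public void Configure(IBlazorApplicationBuilder app)
    {
        // Mounts the root component on the <app> element of index.html
        app.AddComponent<App>("app");
    }
}
```

A page would then get an instance with @inject IQuoteService Quotes at the top of its .cshtml file.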
And if you take a look at the Awesome Blazor curated list of components, you will see the community has started to provide both specific components and general libraries providing Bootstrap4 or Material Design components! Blazor provides simple client-side routing. Your root App.cshtml component will include a <Router> Blazor component which will be in charge of listening to URL changes and rendering the page that matches the new URL. The route segment associated with each route can include route parameters, which you can later access in the component code. It is worth mentioning that currently there is no support for optional route parameters, so you would need to either use two @page directives or two [Route] attributes, one with the parameter and the other one without: @page "/counter" @page "/counter/{Step:int}" <h1>Counter</h1> <p>Current count: @currentCount</p> <button class="btn btn-primary" onclick="@IncrementCount">Click me</button> @functions{ int currentCount = 0; [Parameter] int Step { get; set; } = 1; void IncrementCount() => currentCount += Step; } One caveat with routing is that it uses normal URLs and there is no hashed mode. That is, a @page "/counter" will be accessible at the /counter URL and there is no option to use #/counter. While this is something desirable in most cases, it means that the server hosting the Blazor app should be configured to redirect all URLs that don’t match a static file to the index.html file, which will then bootstrap the application in the browser and will ultimately perform the navigation. If you use the Blazor (ASP.NET Core Hosted) template, this is something taken care of by default, as the ASP.NET Core server includes middleware that handles this. The project also comes with a web.config to set things up in IIS which is how the non-hosted project template can be run in development with IIS Express. 
However, if you plan on hosting the application yourself using a different server (which is totally possible since all the files are static files which will be downloaded by the browser), then you need to be aware of this caveat. See the official docs for more information on hosting Blazor apps. The HttpClient class is available in the dependency injection container by default and is the recommended way of performing HTTP calls from Blazor components. You can see an example in the FetchData.cshtml page which we discussed briefly in the section about dependency injection. @page "/fetchdata" @inject HttpClient Http <h1>Weather forecast</h1> … @functions { WeatherForecast[] forecasts; protected override async Task OnInitAsync() { forecasts = await Http.GetJsonAsync<WeatherForecast[]>("sample-data/weather.json"); } class WeatherForecast { … } } You might be wondering about the JSON specific methods available in the HttpClient, since those are not part of the standard HttpClient class: they are extension methods that Blazor provides on top of it. You should also be aware that HTTP requests from Blazor are ultimately executed by the browser, so the usual browser rules like CORS apply. Both points might introduce further friction and gotchas compared to ASP.NET Core so you might want to stay wary. When working on Blazor applications there will be times where you will need to integrate with existing browser APIs and/or JavaScript libraries, particularly in these early days while functionality still needs to be added to Blazor and WebAssembly. Blazor provides Interop functionality to call JavaScript from C# and vice versa.
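Before moving on to interop, here is a sketch of sending data with those JSON helper methods; the api/forecasts endpoint and the SaveForecast method are made up for this example:

```csharp
@page "/save"
@inject HttpClient Http

<button class="btn btn-primary" onclick="@SaveForecast">Save</button>

@functions {
    WeatherForecast forecast = new WeatherForecast();

    async Task SaveForecast()
    {
        // PostJsonAsync serializes the object to JSON for us;
        // "api/forecasts" is a hypothetical endpoint
        await Http.PostJsonAsync("api/forecasts", forecast);
    }

    class WeatherForecast
    {
        public DateTime Date { get; set; }
        public int TemperatureC { get; set; }
    }
}
```

GetJsonAsync, PostJsonAsync and PutJsonAsync follow the same pattern, taking care of the JSON serialization on both ends.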
Calling a JavaScript method from C# is simple, as long as the method is accessible from the window object (that is, it needs to be accessible as window.some.method) - Add the JavaScript function into an existing JS file that is part of the project or a new <script> block in the index.html: window.MyModule = window.MyModule || {}; window.MyModule.alert = (message) => { return alert(message); }; - In the component, use the InvokeAsync method of the IJSRuntime interface, passing the name of the JavaScript function and any arguments: @page "/interop" <h1>Interop</h1> <input bind="MessageToAlert" class="form-control" /> <button class="btn btn-primary mt-2" onclick="@ShowAlert">Show alert</button> @functions{ string MessageToAlert { get; set; } Task<object> ShowAlert() { return JSRuntime.Current.InvokeAsync<object>("MyModule.alert", this.MessageToAlert); } } Figure 12, calling a JavaScript method from .NET Calling C# from JavaScript is equally simple. The only caveat to consider is whether you want to call a static method or an instance method, since in the latter case you will first need to pass the instance reference to JavaScript. You can read more about it in the official docs. You can also browse the existing components in the Awesome Blazor listing, since many deal with JavaScript interop. As impressive as the work put into Blazor and its current results are, this is still considered an experiment by Microsoft and it's hard to know what its future will be. In this section I will briefly discuss some of the limitations I have found which might or might not be addressed at some point (or might not be that important to you!). The first one is the performance. The promise of efficient code execution in the browser at close to native-code like speed is seriously affected by two issues: the interpreted execution mode of the Mono runtime, and the reliance on JavaScript interop for any DOM access and rendering. You can read more about these issues and particularly the first one in several GitHub issues for the Blazor repo like this one and this one.
The first issue is expected to be improved once the AoT mode of the Mono runtime becomes a reality, but the second depends on the WebAssembly standard moving forward and being implemented by major browsers. It is also not possible right now to compile your application into a combined bundle, instead the assemblies are downloaded individually and loaded into the Mono runtime. While the sizes of the assemblies seem small and they can be cached, the number of requests made on application startup is significant, which can be a problem with bad connections. It seems to me that the WebAssembly promises are negated in part by these issues. And if the dependency on JavaScript interop for most of the tasks that need some sort of IO doesn’t change, you might as well use a JavaScript framework unless you have some CPU intensive code to run on the client. At the same time, I understand that for many, avoiding JavaScript will be a reason enough to use Blazor, provided it gets on par with major JS frameworks in terms of performance and functionality. This one is obviously caused by being so early days for Blazor. For example, live reload doesn’t seem to work with the simplest of the templates, so you need to restart the project after making changes, unless I am missing something very obvious. Debugging is also challenging, although a debugger extension for Chrome just got its first release with limited functionality. Since there are quite a few pieces involved between JavaScript libraries, .NET code and WebAssembly, and it is not always obvious how things work, good debugging support will make a huge difference. Since WebAssembly and Blazor do not intend to replace existing web technologies but rather coexist with them, I would expect at some point better integration with existing tooling and libraries for web applications. For example, there is no obvious way to create component style rules. 
I have seen community attempts like Blazorous but I don’t see the framework itself providing a clear direction for you to use LESS/SASS/Stylus, CSS pre-processors, bundling, etc. It would be great if a component could declare a style block in a specific language and these were extracted and processed by a tool like webpack (or even integration with existing tooling like webpack) into a combined CSS bundle. A similar point can be raised about JavaScript code for components with interop needs. The one solution available right now would be a component library where your Blazor code, CSS and JS is encapsulated. However even in such a case, when you import several of those components from libraries, you might still want to process their CSS/JS files with modern tooling like webpack. In general, there doesn’t seem to be a clean way to develop the JS/CSS part that will inevitably be needed for some of the components of your application. (at least for the foreseeable future until WASM gets better DOM support and libraries are ported to Blazor/WASM). This can get even more annoying since it seems a non-trivial number of CSS/JavaScript libraries might be used by any given project (given the current WebAssembly and Blazor limitations), and with them comes all the modern JavaScript tooling. You could always use tools like webpack to process all the assets in JS/CSS files outside Blazor components, but even that wouldn’t be such a trivial setup to achieve for production and development purposes, so examples and guidance would be much appreciated. Just before we finish, let me briefly introduce an alternative way of running Blazor which was recently announced. It is now possible to run Blazor on the server and update the DOM and handle browser events via a thin JavaScript layer and a permanent SignalR connection. This means you will be running your Blazor client code on the server together with the rest of your application code. 
Think about it, you can execute a framework in your server whose initial focus was to run .NET code in the client using WebAssembly! This was made available with the release 0.5.0 of Blazor, and you should see an option Blazor (Server side in ASP.NET Core) when creating a new project: Figure 13, The server side Blazor template If you create an application using this template, you will see that two different projects are added, a Blazor app and an ASP.NET Core server. If you look at your index.html file you will see that blazor.server.js is included instead of blazor.webassembly.js. When your app starts, this file will establish a connection with the ASP.NET Core server, which will run the actual Blazor application (as well as the ASP.NET Core server) and any interaction with the browser and/or DOM will be propagated through a SignalR connection. The same JavaScript functions that were used in the client side Blazor templates to update the DOM (since WebAssembly cannot access the DOM itself) are used here to render DOM changes which will be pushed through the SignalR connection. When you inspect the network requests from the browser you will see it only needs to download the blazor.server.js script, the blazor.boot.json file and then establish a connection: Figure 14, Faster bootstrap when using server side Blazor This has some pros like the initial download and render of your app being really fast, immediate access to all the tooling for .NET like the debugger, or having no need for AJAX requests to fetch/update data. Of course, it has downsides as well, like every interaction with the browser requiring communication through the SignalR connection, and concerns about how this model would scale with the number of concurrent users. While the official docs contain a brief section describing the server-side model, I have also found this article from Ankit Sharma quite interesting. You can clearly see the experimental nature of the framework!
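For reference, in the 0.5.0 server-side template the server wiring looks roughly like this; the API is very likely to change as the experiment evolves, so treat it as a snapshot rather than a stable recipe:

```csharp
// Startup.cs of the ASP.NET Core server project
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers the services needed to run the Blazor app on the server
        services.AddServerSideBlazor<App.Startup>();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        // Serves the client assets and sets up the SignalR endpoint
        app.UseServerSideBlazor<App.Startup>();
    }
}
```

Note how the Blazor app's own Startup class is handed to the server, which then hosts the components and pushes their renders over the SignalR connection.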
I am sure we will keep seeing new ideas being implemented and some being dropped as the Blazor team gets feedback and WebAssembly/Mono keep evolving. Blazor is impressive. It leverages a number of technologies in smart ways in order to provide a SPA framework that can run .NET code in the browser. Its design is also quite flexible, something that is shown by the fact that you can decide to run it on the server and simply keep a SignalR connection to a JavaScript layer that deals with the DOM. However, the novelty of these technologies and its experimental nature also means it is very early days for it to be a serious option when starting your new application. Expect some serious limitations both in terms of functionality and the tooling available, but nonetheless what is available today is already impressive and works better than you might expect. Whether Blazor and WebAssembly will be able to fulfill their promises is something we will find out in time! However, the pace at which the Blazor team and the community are pushing forward is well worth staying tuned for. This article was technically reviewed.
https://www.dotnetcurry.com/dotnet/1460/blazor-getting-started
Introduction: Internet Connected Scale

Imagine if you never had to worry about running out of your favorite things, because a new package of them would arrive just before you did! That's the idea of NeverOut - the internet connected scale. Store something on it and never run out, because the cloud knows how much you have.

You will need:

- Intel Edison & Grove Starter Kit Plus
- Digital scale (the one shown is a $15 digital kitchen scale from Walmart)
- Dual or quad rail-to-rail opamp (recommend the [MCP617](), pictures show TLV2374)
- Two 10k, two 1k, one 100 ohm resistors
- Solderless breadboard
- Wires

Strongly recommended:

- Soldering iron, solder
- Hot glue gun
- Perfboard

Step 1: Set Up the Edison and Peripherals

Follow this tutorial to set up the Eclipse IDE for the Edison, if you haven't already. Plug the Edison into the Edison Arduino breakout board, the Grove breakout board into that, and the Grove LCD-RGB Backlight into one of the connectors marked I2C. Create a new project in Eclipse called adc_test. In the IoT Sensor Support tab on the right, check Displays->i2clcd.

adc_test.cpp:

    #include <jhd1313m1.h>
    #include <mraa.hpp>
    #include <sstream>
    #include <iomanip>

    int main() {
      upm::Jhd1313m1 display(0);
      mraa::Aio a0(0), a1(1);
      std::stringstream ss;
      while (1) {
        ss.str("");
        display.setCursor(0, 0);
        ss << "a0: " << std::fixed << std::setprecision(4) << a0.readFloat();
        display.write(ss.str());
        ss.str("");
        display.setCursor(1, 0);
        ss << "a1: " << std::fixed << std::setprecision(4) << a1.readFloat();
        display.write(ss.str());
      }
      return 0;
    }

Plug the potentiometer ("Rotary Angle Sensor") into connector A0. Run adc_test. You should see the ADC value change on the display as you turn the potentiometer.

Step 2: Hack the Scale

Unscrew the back of the scale to reveal the contents. This scale has four load cells.
A load cell is a combination of strain gauges and a cantilever structure which works as a force sensor. Load cells come in a few varieties. This scale has half bridge (resistor divider) load cells. Each load cell has three wires: top (red), middle (white) and bottom (black). Cut the wires from the load cells and strip the insulation. We're only going to use two of the load cells. Solder on wires to go to the amplifier (next step). Connect red from one cell to black from the other and vice versa, to "flip" one cell's signal. Hot glue the wires down thoroughly for strain relief. All the bending and pulling you'll do while setting it up can easily break these tiny wires.

Step 3: Build an Instrumentation Amplifier

This is only using two of the load cells from the scale. Not sure if paralleling the other two would cause problems. You'll want A to increase in voltage when weight increases and B to decrease. Depending on the scale you got, this might mean using one of the load cells backwards (black to red and red to black). You need to use a rail-to-rail opamp here (one that works on 5V). I recommend the MCP617. AREF comes from the Edison arduino compatible board. This is a 2 opamp instrumentation amplifier. You can read a bit about it on page 9 of this pdf. I strongly recommend constructing this on a perfboard and soldering it together. Then you can use stranded wire which is less likely to break.

Step 4: Hook It All Together

Connect the output of the two load cells to the + and - inputs of the instrumentation amplifier. Connect power and ground for the load cells and instrumentation amplifier, and VREF to the amplifier. Fire up the adc_test program. You should see the ADC value change when you press on the load cells. Use this to make sure the polarity of the cells is set up right so that they don't cancel each other out. Record the ADC value with nothing on the scale, then put an object of known weight on the scale and record that value.
I used a 2L soda bottle and assumed it weighed 2kg.

Now set up the actual software. You should be able to get and open it with Eclipse. Edit Scale.cpp line 22 with the two ADC values (0.51, 0.562) and the known weight in grams (2000g):

`int grams = (raw - 0.51) * 2000.0 / (0.562 - 0.51);`

Step 5: Connect It to the Cloud

NeverOut was written in less than a day for a hackathon, so its cloud features are pretty sparse. `CloudConnection.cpp` just runs `wget` to send data points (by accessing a URL). In `CloudConfig.h` you can set the base URL it will use. is a simple [Heroku]() app that displays a graph of the past values using [Bokeh]()
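The calibration line in Scale.cpp is just two-point linear interpolation between the empty-scale reading and the known-weight reading. As a quick sanity check, here is the same formula in Python (using the article's values 0.51, 0.562 and 2000 g — these are specific to my scale and amplifier, yours will differ):

```python
def adc_to_grams(raw, zero=0.51, full=0.562, known_grams=2000.0):
    """Map a raw ADC reading to grams via two-point linear calibration."""
    return (raw - zero) * known_grams / (full - zero)

print(adc_to_grams(0.51))   # empty scale reading -> 0 g
print(adc_to_grams(0.562))  # known-weight reading -> ~2000 g
```

Any reading in between scales proportionally, which is all the C++ line does before truncating to an int.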
https://www.instructables.com/id/Internet-Connected-Scale/
qi clock API

Introduction

Libqi provides C++ types to model clocks, time points and durations. See qi clocks for an overview of the clocks.

The functions to get the current time of these clocks are also available in python. They return an integer number of nanoseconds.

You can use qi.systemClockNow() as a substitute for python's time.time():

    import qi
    import time

    fmt = "%a, %d %b %Y %H:%M:%S +0000"

    t_sec = time.time()  # floating point
    print("current local time is: " + time.ctime(t_sec))

    t_nsec = qi.systemClockNow()  # integer
    t_sec = t_nsec * 1e-9  # floating point
    print("current local time is: " + time.ctime(t_sec))

qi.clockNow() is mostly used for system-wide timestamps. Sometimes, you may need the timestamp as a pair of integers, counting seconds and microseconds, respectively. This can be done with:

    import qi

    t_usec = qi.clockNow()/1000
    timestamp = [t_usec/1000000, t_usec % 1000000]

qi.steadyClockNow() is useful when one needs to measure durations while being robust to system clock changes, as in:

    import qi
    import time

    t0_msec = qi.steadyClockNow()/1000000
    time.sleep(0.5)
    t1_msec = qi.steadyClockNow()/1000000
    print("I slept during " + str(t1_msec - t0_msec) + " milliseconds")
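The seconds/microseconds split above is plain integer arithmetic (the examples rely on Python 2 integer division; `//` makes that explicit and works on Python 3 too). A standalone sketch with a fixed nanosecond count, no libqi required:

```python
def ns_to_timestamp(t_nsec):
    """Split a nanosecond count into a [seconds, microseconds] pair."""
    t_usec = t_nsec // 1000  # drop sub-microsecond precision
    return [t_usec // 1000000, t_usec % 1000000]

# 1234567891234567 ns -> 1234567 s and 891234 us
print(ns_to_timestamp(1234567891234567))
```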
http://doc.aldebaran.com/2-4/dev/libqi/api/python/clock.html
Hello All,

In regard to ESG's load balancing service, I read in many places that the ESG must be the default gateway of the server network in case of inline LB mode (the DLR can't be in the path). However, I feel like the below design would work if I enable Source NAT in inline mode. The ESG will do both Source and Destination NAT and send traffic to the internal server. Since the internal server sees the traffic coming from the ESG IP address (instead of the actual source), the server will return the response to the ESG using the DLR as its default gateway. Please see the sample topology below (also attached) and give your thoughts. I would appreciate it if someone shared their experience and/or lessons learned.

The topology you reference is fine, as inline mode doesn't explicitly require that the ESG be the default gateway (the Configure a One-Armed Load Balancer section makes a reference to that being a requirement only when the ESG and pool members are on the same subnet and you use transparent mode). The only requirement is that the ESG must be in the return path for all client sessions, as direct server return (DSR) is unsupported. So as long as you won't have any clients accessing the LB from other interfaces on the DLR (which could then forward return traffic directly to them and bypass the ESG), your topology works fine, as the ESG is still in the traffic path.

Hello,

Noting that the default gateway of load balanced servers should be the ESG (load balancer) only when the ESG and members are on the same subnet, like the below design:

But in your case and following your network design, it is correct and there is no need to change the gateway configuration.

Please consider marking this answer "CORRECT" or "Helpful" if you think your question has been answered correctly.

Cheers,
VCIX6-NV|VCP-NV|VCP-DC| linkedin.com/in/hassanalkak
https://communities.vmware.com/message/2809614
help with a connection? Answered Does any one know how to connect a web cam to a crt monitor without a computer or a VGA camera to a crt monitor Question by techxpert | last reply Does any one know how to connect a web cam to a crt monitor without a computer or a VGA camera to a crt monitor Question by techxpert | last reply Link: Question by rzigmu | last reply I am trying to connect a wiimote to pc. i have a dealextreame $1.80 blutooth adapter. i have bluesolei and widcomm drivers installed. the pc is not recognizing my wiimote. any help is very appreciated. I tried entering some codes to try to connect my samsung t301g to my mac. The phone didn't do much. I wasn't sure if I am supposed to enter the codes in the main menu. The link shows the codes that should work. Question by jbaker22 | last reply How would you connect the wires from a USB (female) with wires that are red, black, green, and white, to a 5V boost regulator that only had three legs - Vin, Ground, Vout? It's not for a computer so I don't think we need the green and white wires, but I could be wrong. Which wire goes where? Thanks! Question by mckywer | last reply I have an old iPod touch that I can hook up to my tv I was wondering how can I use that To connect my new iPod touch to my tv wirelessly like I want to old iPod to be the receiver then I would tv out that on to my tv any suggestions thanks in advanced . Topic by Darki34 | last reply Question by mauricewarebee | last reply This is a People counter, well hope to be if working.ha. Lazer across a doorway hitting a LDR. The 4 digit 7 seg display counting up 1 each time a person breaks the beam. As of now i have a counting sketch from the Sparkfun example. It is counting up 0 to 999 and at the same time i have an LDR reading to the serial monitor and blinking the LED on pin 13. But they are not "interacting". I am trying to get the beam breakes from the LDR to advance the count by 1 every time it is broken. 
In the loop function it was millis that was advancing the counting. I have changed it to displayNumber(counter). Among other things. But i haven't been successful in having the LDR advance the count. The sketch is still missing some "stuff". What could i change to have the counter advance by 1 every time the lazer beam is broken? Thanks W

/*
 6-13-2011
 Spark Fun Electronics 2011
 Nathan Seidle

 This code is public domain but you buy me a beer if you use this and we meet someday (Beerware license).

 4 digit 7 segment display:
 Datasheet:

 This is an example of how to drive a 7 segment LED display from an ATmega without the use of current limiting resistors. This technique is very common but requires some knowledge of electronics - you do run the risk of dumping too much current through the segments and burning out parts of the display. If you use the stock code you should be ok, but be careful editing the brightness values.

 This code should work with all colors (red, blue, yellow, green) but the brightness will vary from one color to the next because the forward voltage drop of each color is different. This code was written and calibrated for the red color.

 This code will work with most Arduinos but you may want to re-route some of the pins.

 7 segments 4 digits 1 colon = 12 pins required for full control
*/

#define ldrPin A2 // pin used for input (analog)

int digit1 = 11; //PWM Display pin 1
int digit2 = 10; //PWM Display pin 2
int digit3 = 9;  //PWM Display pin 6
int digit4 = 6;  //PWM Display pin 8

//Pin mapping from Arduino to the ATmega DIP28 if you need it
//int ldrPin = A2;
int segA = A1; //Display pin 14
int segB = 3;  //Display pin 16
int segC = 4;  //Display pin 13
int segD = 5;  //Display pin 3
int segE = A0; //Display pin 5
int segF = 7;  //Display pin 11
int segG = 8;  //Display pin 15

int ldr_pinValue;
int counter;
int currState;
int then;
//int ldrpread;
//int digit[4];
//int leftover;

int LDR = A2; //analog pin to which LDR is connected, here we set it to 0 so it means A0
int LDRValue = 0; //that's a variable to store LDR values
int light_sensitivity = 500; //This is the approx value of light surrounding your LDR
//int digit_to_show = 0;
int ldr_Pin = 0; // LED status (0 = low, 1 = high)
int inVal = 0; // variable used to store state of input
int switchOn = 725; // value at which we switch LED on
int switchOff = 550; // value at which we switch LED off

void setup() {
  {
    Serial.begin(9600); //start the serial monitor with 9600 baud
    pinMode(13, OUTPUT); //we mostly use 13 because there is already a built in yellow LED in arduino which shows output when 13 pin is enabled
  }
  pinMode(ldr_Pin, INPUT);
  pinMode(segA, OUTPUT);
  pinMode(segB, OUTPUT);
  pinMode(segC, OUTPUT);
  pinMode(segD, OUTPUT);
  pinMode(segE, OUTPUT);
  pinMode(segF, OUTPUT);
  pinMode(segG, OUTPUT);
  pinMode(digit1, OUTPUT);
  pinMode(digit2, OUTPUT);
  pinMode(digit3, OUTPUT);
  pinMode(digit4, OUTPUT);
  pinMode(13, OUTPUT);
}

void loop() {
  {
    LDRValue = analogRead(LDR); //reads the ldr's value through LDR which we have set to Analog input 0 "A0"
    Serial.println(LDRValue); //prints the LDR values to serial monitor
    delay(5); //This is the speed by which LDR sends value to arduino
    if (LDRValue < light_sensitivity) {
      digitalWrite(13, HIGH);
    } else {
      digitalWrite(13, LOW);
      {
        if (currState() > 300) then currState = HIGH
      } else currState = LOW //endif
      if currState != prevState and currState == LOW then // LOW or HIGH depending on the circuit
        counter++
        prevState = currState
      endif
      //long startTime = millis();
      displayNumber(counter);
      //while( (millis() - startTime) < 2000) {
      //  displayNumber(1217);
      //}
      //delay(1000);
    }

//Given a number, we display 10:22
//After running through the 4 numbers, the display is left turned off

//Display brightness
//Each digit is on for a certain amount of microseconds
//Then it is off until we have reached a total of 20ms for the function call
//Let's assume each digit is on for 1000us

void displayNumber(int toDisplay) {
#define DISPLAY_BRIGHTNESS 500
#define DIGIT_ON HIGH
#define DIGIT_OFF LOW

  long beginTime = millis();

  for(int digit = 4 ; digit > 0 ; digit--) {
    //Turn on a digit for a short amount of time
    switch(digit) {
    case 1: digitalWrite(digit1, DIGIT_ON); break;
    case 2: digitalWrite(digit2, DIGIT_ON); break;
    case 3: digitalWrite(digit3, DIGIT_ON); break;
    case 4: digitalWrite(digit4, DIGIT_ON); break;
    }

    //Turn on the right segments for this digit
    lightNumber(toDisplay % 10);
    toDisplay /= 10;

    delayMicroseconds(DISPLAY_BRIGHTNESS); //Display digit for fraction of a second (1us to 5000us, 500 is pretty good)

    //Turn off all segments
    lightNumber(10);

    //Turn off all digits
    digitalWrite(digit1, DIGIT_OFF);
    digitalWrite(digit2, DIGIT_OFF);
    digitalWrite(digit3, DIGIT_OFF);
    digitalWrite(digit4, DIGIT_OFF);
  }

  while( (millis() - beginTime) < 10) ; //Wait for 20ms to pass before we paint the display again
}

//Given a number, turns on those segments
//If number == 10, then turn off number
void lightNumber(int numberToDisplay) {
#define SEGMENT_ON LOW
#define SEGMENT_OFF HIGH

  switch (numberToDisplay) {
  case 0: digitalWrite(segA, SEGMENT_ON); digitalWrite(segB, SEGMENT_ON); digitalWrite(segC, SEGMENT_ON); digitalWrite(segD, SEGMENT_ON); digitalWrite(segE, SEGMENT_ON); digitalWrite(segF, SEGMENT_ON); digitalWrite(segG, SEGMENT_OFF); break;
  case 1: digitalWrite(segA, SEGMENT_OFF); digitalWrite(segB, SEGMENT_ON); digitalWrite(segC, SEGMENT_ON); digitalWrite(segD, SEGMENT_OFF); digitalWrite(segE, SEGMENT_OFF); digitalWrite(segF, SEGMENT_OFF); digitalWrite(segG, SEGMENT_OFF); break;
  case 2: digitalWrite(segA, SEGMENT_ON); digitalWrite(segB, SEGMENT_ON); digitalWrite(segC, SEGMENT_OFF); digitalWrite(segD, SEGMENT_ON); digitalWrite(segE, SEGMENT_ON); digitalWrite(segF, SEGMENT_OFF); digitalWrite(segG, SEGMENT_ON); break;
  case 3: digitalWrite(segA, SEGMENT_ON); digitalWrite(segB, SEGMENT_ON); digitalWrite(segC, SEGMENT_ON); digitalWrite(segD, SEGMENT_ON); digitalWrite(segE, SEGMENT_OFF); digitalWrite(segF, SEGMENT_OFF); digitalWrite(segG, SEGMENT_ON); break;
  case 4: digitalWrite(segA, SEGMENT_OFF); digitalWrite(segB, SEGMENT_ON); digitalWrite(segC, SEGMENT_ON); digitalWrite(segD, SEGMENT_OFF); digitalWrite(segE, SEGMENT_OFF); digitalWrite(segF, SEGMENT_ON); digitalWrite(segG, SEGMENT_ON); break;
  case 5: digitalWrite(segA, SEGMENT_ON); digitalWrite(segB, SEGMENT_OFF); digitalWrite(segC, SEGMENT_ON); digitalWrite(segD, SEGMENT_ON); digitalWrite(segE, SEGMENT_OFF); digitalWrite(segF, SEGMENT_ON); digitalWrite(segG, SEGMENT_ON); break;
  case 6: digitalWrite(segA, SEGMENT_ON); digitalWrite(segB, SEGMENT_OFF); digitalWrite(segC, SEGMENT_ON); digitalWrite(segD, SEGMENT_ON); digitalWrite(segE, SEGMENT_ON); digitalWrite(segF, SEGMENT_ON); digitalWrite(segG, SEGMENT_ON); break;
  case 7: digitalWrite(segA, SEGMENT_ON); digitalWrite(segB, SEGMENT_ON); digitalWrite(segC, SEGMENT_ON); digitalWrite(segD, SEGMENT_OFF); digitalWrite(segE, SEGMENT_OFF); digitalWrite(segF, SEGMENT_OFF); digitalWrite(segG, SEGMENT_OFF); break;
  case 8: digitalWrite(segA, SEGMENT_ON); digitalWrite(segB, SEGMENT_ON); digitalWrite(segC, SEGMENT_ON); digitalWrite(segD, SEGMENT_ON); digitalWrite(segE, SEGMENT_ON); digitalWrite(segF, SEGMENT_ON); digitalWrite(segG, SEGMENT_ON); break;
  case 9: digitalWrite(segA, SEGMENT_ON); digitalWrite(segB, SEGMENT_ON); digitalWrite(segC, SEGMENT_ON); digitalWrite(segD, SEGMENT_ON); digitalWrite(segE, SEGMENT_OFF); digitalWrite(segF, SEGMENT_ON); digitalWrite(segG, SEGMENT_ON); break;
  case 10: digitalWrite(segA, SEGMENT_OFF); digitalWrite(segB, SEGMENT_OFF); digitalWrite(segC, SEGMENT_OFF); digitalWrite(segD, SEGMENT_OFF); digitalWrite(segE, SEGMENT_OFF); digitalWrite(segF, SEGMENT_OFF); digitalWrite(segG, SEGMENT_OFF); break;
  }
}

Question by WWC | last reply

Please i am really looking forward to an appropriate answer. The main concept behind my question is that: The internet usb is attached to a single computer, other computers should be allowed to access data and as well as internet throw the computer to which the broadband USB is attached. Question by pizzadox747 | Hey, I got an old projector from my school recently, that has a video in, and i have managed to make it work with my DVD player. My question now is, is it possible to connect it to my computer? It only has One RCA (Actually, I'm not sure its called RCA. The cable that plugs into it looks like this) video in cable. I was wondering if this;=UTF8&qid;=1324852028&sr;=1-6 would work? Does anyone know please? Thanks in advance Topic by schumi23 | last reply
Question by RECONWARRIOR | Hi guys How to connect orange pi 1with android mobile via vnc app Question by Mr Qatanani I have several old laptop screens and I would like to connect them together to show movies in my mancave using a dvd player is this possible Question by 69olds | last reply Windows diagnostic says WiFi doesn't have valid IP configuration. IPads and Kindle work fine. Was using it, shut it down, went back later and couldn't connect. It also says that my device is connected but may not be able to access anything on network which I can't. It also says to verify the network security key. I checked and it is the right password. FYI I am not computer savvy. Question by PeggyS56 | I bought a new wireless card for my computer and i can find my network but it refreshes every 5 seconds or so, so i cant connect because it times out. any ideas? Question by gantyman | last reply I would like to improve the signal from our modem. (serviced by centurylink) Question by MizDaizy | last reply I'm having trouble loading "TI-connect" on my mac so that I can write programs for the ti-84 on the computer. After downloading ti-connect, I click on it, and it seems to want to install it again. I've done the complete loop, repeating the download process about 4 times. Do any of you know what to do? Topic by Toga_Dan | last reply Hi im dng a project on traffic control system on emergency lane using image processing.Im done with the image processing part..now i don't know on how to connect xbee transmitter to webcam..can any1 help me Question by esanthana | last reply Hi there, I'm having a huge problem connecting my desktop pc to the new wireless network at my home. I can connect via laptop and iphone but just not via my desktop that is using a wireless USB adapter. What can I do to solve this problem? What information do you need for anyone to be able to be able to help. Look forward to any help Topic by Nzginga | last reply I should solder 2 wires... 
one contact can be easily seen, but the other one seems to be... the black plastic??? Question by lllalllo | last reply How can i replace the wire between my laptop and woofer, with a wireless alternative, so that i can play music from wherever i want at home ? Question by sunboy89 | last reply I have a 5th gen IPOD video and while I was replacing the hard drive some of the prongs on a circuit board connection (the thing the ribbon wire connects to) broke off. Now the ribbon wire will not stay in place, the ribbon wire goes in but is not very secure. I was wondering if there was any quick and easy way to fix it. I may need to replace the part and solder a new one in..I personally do not know how to do that but I know someone who can. I just don't know where I could pick up a new part...any ideas? Topic by breaddemon | My t-mobile acct. has been disconnected.But with my husband in Icu I kinda need my phone.If I buy a prepaid sim card can I still use my T-mobile phone?Desperate in Fl. Question by miamihomegrl | last reply Hi, everyone! I'm Jess.Before joining the team here at Instructables, I was an Elementary STEM and ESE Teacher, and I have loved art and maker/DIY projects pretty much my entire life. Recently, I’ve seen some really awesome and inspiring projects and noticed that the authors were teachers, so I thought it would be cool to connect here and share what we're working on with other teachers who also love to tinker and make. I'm so excited to learn about what you all are up to and celebrate your crafty, techie, maker awesomeness! :DFeel free to say hi and share about the things you love to create or link to your most recent project in the comments below. Topic by WeTeachThemSTEM | last reply Me and my family recently purchased an Auvio soundbar from radioshack. It came with an RCA cable to hook up to our TV. However, my TV uses an "S/PDIF" jack for audio. 
I don't know where to get one of these, but it's been really inconvenient having to switch the RCAs to whatever component I'm using at the time instead of just hooking it up to the TV. What should I do? Question by General Eggs | last reply A friend and I are currently developing a turret that has a stationary camera attached to it, and two servo motors that move the actual gun part of the turret up/down and left/right. I am using software already developed for the program, and I plan to use my NXT to run the software and act as the driver for the servo motors using some modified cables. My only problem is that since I am running the software on the NXT, I need to be able to set up the USB webcam I have so that it inputs it's data into the NXT unit. The problem is, I have no idea how to do this. I was thinking of using the standard cable the NXT comes with that has a USB end, and connecting it to the USB end of my camera with a double female USB adaptor, but I'm not entirely sure that it will read this as an input. Also, I need that port available for when I calibrate the system on the computer. Does anywone know how I can modify the male USB connector of the camera so that I can plug it into one fo the NXT's input ports? Any help would be great. Thanks! Question by pyrorower | last reply I just got a new hard drive for my t42 and reinstalled ubuntu but it is just extremely slow. I am using it in wireless mode, and I have multiple computers connected to my modem in my house. It takes about 10 minutes just for me to get to my IGOOGLE page. My laptop also keeps cutting in and out of connectivity. Thanks Question by acidbass | last reply Hi! I am running Windows XP on a virtual machine. My NIC settings are for a bridged adapter, and I need to keep it that way. The host computer can communicate with microsoft.com and silverlight just fine, but my virtual machine can not (although every other website that I have tried works just fine... 
I'm actually posting this from my Virtual machine). I also have a server that is running server 2003 that has the same problem. All other PCs are running either Windows 7 Pro, (and my parents have Vista Home Premium). Every computer including my parents work just fine with microsoft.com and silverlight. My router is running DD-WRT micro, but I don't know if that helps any. Since Server 2003 and XP both have the same problem, and Server 2003 is based off of XP, I'm wondering if it has something to do with XP. I am using my virtual machine for testing purposes, and I need it to work with silverlight. Any suggestions on how I might fix this? Thanks! Question by thegeekkid | last reply I am making a foldable solar charger. I will have 3 panels connected in parallel, for a total rating of 5 volts at 1.5 amps. This will connect directly to a USB boosting converter for a maximum output of 5V at 1 amp for charging or powering a USB device. No batteries will be connected. Can I just use a diode on the + wire where it connects to the USB charger or do I really need one on each panel? Thanks for any help! Question by Stevemills04 | last reply I recently bought a new cellphone and it works fine connecting to various wi-fi points except my own home one, other cellphones connect to my network no trouble. modem model is RG27- 01HGV-0 Question by simonburgess | last reply Hi - I have arduino uno card which has a light sensor. I would want to transfer the light readings to MySQL database. How do I do it? I can write a sketch that can connect to MySQL and insert the data and upload the same to arduino card. But how will my ardino card connect to MySQL. Can you help me here as I am beginner to these topics? Regards, Harish Topic by harishkompelh | last reply? Topic by greendude | last reply Hi, i want to connect a USB printer to an arduino to control it, i know that i can connect a Printer by using the Parallel cable, but my printer only works with usb. Any ideas???? 
Topic by gastonbr100 | need to connect the Arduino to raspberry pi and program both with labview Is there any instructions please Topic by mustafaa201 | last reply I have 2 strands of led lights that go up a Flagpole on my rv, I am trying to figure out how to connect a timer through the DC Plug Thanks in advance Mike Question by quine9 | last reply I am trying to learn how to connect multiple computers so that they work together as one. How do I do this? Topic by PlainsPrepper | last reply When i connect the coil with the circuit, there spark in the switch. Is it because the switch doesn't strong enough ? The projectile didn't fly. Question by shinizaki | last reply Im in hotel and trying to connect to thier wifi. have user and password but xbox does not promt for them.connects to network but not to internet. Is this an IR receiver ? If it is, how can i connect it to a (or a few) LED's so that i can turn on the LEDs remotely? Please help, Thanks fujiapple Question by fujiapple | last reply I am trying to connect a bluetooth dongle to my Bose headphones because I saw it on instructables. The only problem is the number of wires. The instrucatble shows three wires while my bluetooth headset only had two. Should I try to connect these two to the corresponding colors on the other? Question by metalshiflet | last reply I have a reciver and i want to make an rc car, But the reciver has to weak signal so I decided to use the reciver as a switch wich will connect the motor to the battery,but I dont know how to connect When the motor is spining forward,and and push the buton backward on the transmitter the poles change and it doesnt work any more So how I need to connect Question by Mrfatjonable | last reply How do you connect an amplifier to amplify the sound through the charge connector and still be able to charge the phone for an iPhone4 instead of using the headphone out to amplify the sound? Topic by bricabracwizard | last reply My screen broke on my toshiba notebook. 
i wan to plug into my tv Question by shulk | last reply
https://www.instructables.com/circuits/community/?search=connectivity
Persist Ampersand.js models and collections to various storage backends.

Installation

    npm install --save storage-mixin

Usage

Use this mixin with any existing model and collection to easily persist them to a number of different storage backends.

The model needs:

- the mixin
- an idAttribute value (Ampersand's default is id)
- a namespace value
- a storage key to pass options to the mixin (see Options)

    var Model = require('ampersand-model');
    var storageMixin = require('storage-mixin');

    var Spaceship = Model.extend(storageMixin, {
      idAttribute: 'name',
      namespace: 'StorableModels',
      storage: {
        backend: 'disk',
        basepath: '/tmp'
      },
      props: {
        // your property definitions, will be persisted to storage
        name: ['string', false, ''],
        warpDrive: ['boolean', false, false]
      },
      session: {
        // your session properties, will _not_ be persisted to storage
      }
      // ... other model methods
    });

Now you can call the .save() method on instantiated models.

    var model = new Spaceship();
    model.save({name: 'Apollo 13', warpDrive: false});

Options

Options are passed to the storage mixin via the storage key. If you only want to choose which backend to use and don't need to pass any further options along, the storage key can be a string with the backend name.

    var StorableModel = Model.extend(storageMixin, {
      storage: 'disk', // use disk storage with default basepath `.`
      props: {
        // ...
      }
    });

If you want to further customize the storage mixin, use an object and provide additional options. The backend value is required, all other values are optional and backend-dependent.

    var StorableModel = Model.extend(storageMixin, {
      storage: {
        // use disk storage with a custom basepath
        backend: 'disk',
        basepath: '/tmp/myapp/storage'
      },
      props: {
        // ...
      }
    });

Backends

The following backends are currently supported: local, disk, remote, null, secure, splice. The default is local.

local Backend

Stores objects in local storage of the browser. Only works in a browser context.
The backend uses the localforage npm module under the hood and supports IndexedDB, WebSQL and localStorage drivers. A separate instance of the store is created for each namespace.

Additional Options

- driver: The driver to be passed on to localforage. One of INDEXEDDB, LOCALSTORAGE or WEBSQL. The default is INDEXEDDB.
- appName: The name of the IndexedDB database (not the data store inside the database, which is the model's namespace). Most users will never see this, but it's best practice to use your application name here. Default is storage-mixin.

disk Backend

Stores objects as .json files on disk. Only works in a node.js / server context, or in an Electron renderer process where the remote module is available to get access to the fs module.

The file location is <basepath>/<namespace>/<id>.json. <basepath> is provided as an option. The <namespace> is set on the model directly, and the <id> is the property of the model specified by idAttribute. The first example on this page would be stored as:

    /tmp/StorableModels/Apollo 13.json

Additional Options

- basepath: The base path for file storage. The default is `.`.

remote Backend

This is a wrapper for ampersand-sync, that stores and retrieves models to/from a remote server via asynchronous ajax / xhr requests. Pass in the url value as an option or set it directly on the model.

Additional Options

- url: The url to fetch the model/collection, see ampersand-model#url.

null Backend

This backend exists mostly for debugging and testing purposes. It does not store anything but will return with successful callbacks on all method calls. For reads, it will return an empty object {} for models, or an empty array [] for collections.

secure Backend

The secure backend wraps the keytar module to persist data into a secure keychain, keyring or password manager (works for OS X, Linux, Windows). There are some limitations though, as the interface does not allow listing all keys in a given namespace.
Therefore, to fetch a collection, it has to be pre-populated with models containing the ids.

// this won't work !
var collection = new StorableCollection();
collection.fetch();

// do this instead
var collection = new StorableCollection([
  {id: 'some id'},
  {id: 'some other id'},
  {id: 'third id'}
], {parse: true});
collection.fetch();

The static .clear() method that other storage backends possess is also a no-op in the secure backend for the same reason. Keys have to be deleted manually.

Additional Options

- appName : Entries in the keychain have a key of <appName>/<namespace>. As this is visible to the user, you should use your application name here. Default is storage-mixin.

splice Backend

This is a hybrid backend that consists of a local and a secure backend under the hood. It also receives a secureCondition function as an optional argument, which takes a value and key and returns whether this key should be stored in the secure or the local backend. On retrieval, it merges the results from both backends to form a complete object again. This is particularly useful for storing user-related data where some fields contain sensitive information and should not be stored as clear text, e.g. passwords.

Additional Options

- appName : Passed to both the local and secure backends; acts as a global scope (e.g. database name in IndexedDB, prefix in keychain keys). Use your application name here. Default is storage-mixin.
- secureCondition : Function that decides which keys/values of a model should be stored in the secure backend vs. the local backend. The function takes a value and key and must return true for the keys that need to be stored securely.
Default is:

function(val, key) { return key.match(/password/i); }

Example

var Model = require('ampersand-model');
var storageMixin = require('storage-mixin');

var User = Model.extend(storageMixin, {
  idAttribute: 'id',
  namespace: 'Users',
  storage: {
    backend: 'splice',
    appName: 'My Cool App',
    secureCondition: function(val, key) {
      return key.match(/password/i);
    }
  },
  props: {
    id: 'string',          // stored in `local`
    name: 'string',        // stored in `local`
    email: 'string',       // stored in `local`
    lastLogin: 'date',     // stored in `local`
    password: 'string',    // stored in `secure`
    oldPassword: 'string'  // stored in `secure`
  }
});

License

Apache 2.0
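The split-and-merge behaviour of the splice backend can be sketched without any dependencies. The snippet below is Python purely for illustration (storage-mixin itself is JavaScript), and `split_model`/`merge` are hypothetical names; the condition mirrors the documented default `secureCondition`:

```python
import re

def secure_condition(val, key):
    # Mirrors the documented default: route password-like keys to `secure`.
    return re.search(r"password", key, re.IGNORECASE) is not None

def split_model(attrs, condition=secure_condition):
    """Split a model's attributes into (local, secure) parts."""
    local, secure = {}, {}
    for key, val in attrs.items():
        (secure if condition(val, key) else local)[key] = val
    return local, secure

def merge(local, secure):
    """On retrieval, the two partial results form a complete object again."""
    merged = dict(local)
    merged.update(secure)
    return merged

user = {
    "id": "1", "name": "Ada", "email": "ada@example.com",
    "password": "hunter2", "oldPassword": "hunter1",
}
local, secure = split_model(user)
assert set(secure) == {"password", "oldPassword"}
assert set(local) == {"id", "name", "email"}
assert merge(local, secure) == user
```

Note how the case-insensitive match catches both `password` and `oldPassword`, matching the `props` comments in the example above.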
https://preview.npmjs.com/package/storage-mixin
Binary search tree source code insert delete search

- I am struggling to insert page numbers into a word document; the document has one page which is landscape and all others are portrait. Page numbers need to start at the introduction page and then tie up with the index.
- ...will produce many LED products like the LED candles. We want something simple but very elegant. We like a colour and font that represents a luxury brand. Also we need an insert card to be placed inside the package. It will be one sided, will thank the customer and hope they will enjoy it. It will include such info as our website (
- Hi, we need help with a project that will: analyse a diabetes dataset; write a Python program to train and test the diabetes dataset with a decision tree algorithm; create a webpage with a form to take in patient details (age, body mass, blood sugar etc.) and return Yes/No/Probability to predict a patient's chance of getting diabetes. I understand programming
- We are manufacturing a teething tree and need some files converted from STL to IGS or STP
- ...to the attached design. 2. We also need licensed drivers to drive and do tree work. Pay will be determined on experience and motivation. Experience is not required but would help. Message ASAP if interested
- I need a website designed.
- Simple project: I have 25 pictures with people (singers and rappers) on them. The background of these pictures should be professionally deleted. You have to start right away!
- I currently have a website [url removed, login to view]. I have houses that are for rent for vacation rentals. I want to be able to put a filter in so that my potential guests can enter the dates they are interested in to see which of my properties are available.
- I have created a custom login using the plugin 'Login Form' By. Although I'm having trouble inserting the shortcode and can't seem to get the login working....
- ...project (except for angular) The aim of this project is to build a timeline for a character's actions in the format of a family tree (e.g. [url removed, login to view]). The character goes to "events" and each event has actions associated with it called "branches".
- ...Submitting a project with no errors, and once uploaded to the App Store it returns with invalid binary. I have lost the email to check why... Is there any way for someone to find out why? Maybe upload with their own account or look at the code...
- Many sources (api, Mysql, Mongo DB, Scrap); join sources; clean data; structure data; mining data; ML data; webservices to feed Bss
- I have a project (enclosed here as RS_ECC) that... when you run it through the Decode function, it would correct the bit and then return (1). Please read this entire project before posting a bid. There are several open-source projects with examples of RS decoding, so any of them that you can use to work with the provided example will be acceptable.
https://www.freelancer.com/job-search/binary-search-tree-source-code-insert-delete-search/
Description

We need a way for job schedulers such as HADOOP-3445 and HADOOP-3476 to provide info to display on the JobTracker web interface and in the CLI. The main things needed seem to be:

- A way for schedulers to provide info to show in a column on the web UI and in the CLI - something as simple as a single string, or a map<string, int> for multiple parameters.
- Some sorting order for jobs - maybe a method to sort a list of jobs.

Let's figure out what the best way to do this is and implement it in the existing schedulers.

My first-order proposal at an API - augment the TaskScheduler with:

- public Map<String, String> getSchedulingInfo(JobInProgress job) - returns key-value pairs which are displayed in columns on the web UI or the CLI for the list of jobs.
- public Map<String, String> getSchedulingInfo(String queue) - returns key-value pairs which are displayed in columns on the web UI or the CLI for the list of queues.
- public Collection<JobInProgress> getJobs(String queueName) - returns the list of jobs in a given queue, sorted by a scheduler-specific order (the order it wants to run them in / schedule the next task in / etc).
- public List<String> getQueues();

Issue Links

- blocks HADOOP-3445 (Implementing core scheduler functionality in Resource Manager (V1) for Hadoop - Closed) and HADOOP-3746 (A fair sharing job scheduler - Closed)
- duplicates HADOOP-3699 (Create a UI for viewing configured queues and related information - Closed)
- is part of HADOOP-3444 (Implementing a Resource Manager (V1) for Hadoop - Resolved)
- is related to HADOOP-4213 (NPE in TestLimitTasksPerJobTaskScheduler - Closed)

Activity

Making queues explicit makes sense for the purposes of getSchedulingInfo then. As for what it should do when applied to a job, in the fair scheduler at least we can have it show the job's fair share of map slots / reduce slots and its weight in the fair sharing calculations.
This was useful both for debugging and for letting administrators understand the effects of putting jobs in a particular pool, changing their priority, etc.

Regarding the comparator, I made it one because Owen/Sameer/Arun wanted to also be able to compare a subset of the jobs, for example to be able to filter jobs by user or something of that sort. With a comparator, you choose your subset as you wish and then sort it. (In all this I'm assuming that the JobTracker or JobQueueManager knows the full list of jobs and can therefore filter it.) However, it would also be possible to return the whole job list and filter it afterwards - which one is easier?

Added getSchedulingInfo(queue). We probably also need a way to get all the configured queues. Something like:

- public List<String> getQueues();

By default, this would return the single default queue that is there today in the jobtracker. Makes sense?

Okay, I added that too.

> Regarding the comparator, I made it that ...it would also be possible to return the whole job list and filter it afterwards - which one is easier?

I don't think a Comparator is the right abstraction here. There is a difference between filtering and reordering. A Comparator is probably needed for the latter, but not for filtering. The Scheduler imposes an ordering on the jobs. A caller may choose to see (filter) only some of those jobs, but the ordering is determined by the Scheduler. I think you need a method like:

Collection<JobInProgress> getJobs(String queueName)

Users can filter this collection as they see fit.

Makes sense, I've changed it to that.

Attaching a patch adding APIs in the TaskScheduler to expose scheduling information related to it.
Added the following methods to TaskScheduler:

- Map<String,String> getSchedulingInfo(JobInProgress job) - returns a map containing scheduling information related to a particular job
- Map<String,String> getSchedulingInfo(String queueName) - returns a map containing scheduling information related to a particular queue
- Collection<JobInProgress> getJobs(String queue) - returns a list of jobs for a particular queue
- List<String> getQueues() - returns all the queues which the scheduler uses
- List<String> getQueueSchedulingParameterList() - returns an ordered list of the scheduling parameters related to queues
- List<String> getJobSchedulingParameterList() - returns an ordered list of the scheduling parameters related to a particular job

The last two methods were introduced to determine the order in which the columns in a table have to be generated by the web UI.

A new method was introduced in JobTracker:

- TaskScheduler getTaskScheduler() - returns the instance of the task scheduler which is used by the JobTracker

JobQueueTaskScheduler and LimitTasksPerJobTaskScheduler have been modified to implement the new APIs to expose scheduling information.

Have made changes in jobtracker.jsp to do the following:

- Create a new section called Scheduler information and build a table dynamically for displaying the scheduler information pertaining to the queues which the scheduler holds. The order of the columns is determined by the value returned from getQueueSchedulingParameterList().
- Created sections in the job table generation for displaying scheduling information pertaining to the particular job. The order of the columns is determined by the value returned from getJobSchedulingParameterList().
- If a particular scheduler returns null for getQueueSchedulingParameterList, then the new section called Scheduler information is not displayed in jobtracker.jsp.
- If a particular scheduler returns null for getSchedulingInfo(JobInProgress job), then no new section is added to the job table.
Any thoughts on improving the above approach?

Sreekanth selected Submit Patch by mistake instead of attaching the patch. Canceling it on his behalf, as he's not added to the list of contributors yet.

UI mockup with the default JobQueueTaskScheduler.

The jobs page is actually useful to users who have submitted jobs. The scheduler information is typically important for cluster administrators. Does it make sense to make the scheduler information show up as a separate page rather than the same page that lists all users' jobs? Of course, power users might want to see scheduling information sometimes.

Regarding where the scheduler information should be - there are two types of scheduler information:

- Job specific - which should be with the jobs
- System wide - which can be either with the jobs or on a different page, as Dhruba points out.

I am fine with either, but I am leaning more towards having it on the jobs page because:

- Then all scheduler information is on one page
- As Dhruba agrees, power users might still want to see scheduling information.

Also, as we are discussing in HADOOP-3698, it may be that queues, being acknowledged as part of the mapred system, might not be in the scheduler API. I think we should wait for a consensus on that before moving forward on this issue.

I must clarify that I was talking only about the List<String> getQueues() API, and not the rest of the methods proposed here. Sorry for any confusion caused. As mentioned here, we are now considering adding this to a new class called QueueManager.
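The QueueManager shape that the thread converges on — the scheduler keeps a mutable per-queue info object, and a stringified snapshot is produced only when a caller asks — can be sketched in a few lines. This is an illustrative toy in Python (the real code is Java, and the names below are not Hadoop's):

```python
class QueueManager:
    """Toy sketch of the per-queue scheduling-info idea from this thread.

    The scheduler stores an arbitrary info object per queue; callers get a
    stringified snapshot (via str()) only on demand, so the scheduler can
    keep updating its own object in place without re-registering it.
    """
    def __init__(self, queues=("default",)):
        self._info = {q: None for q in queues}

    def set_scheduling_info(self, queue, info):
        self._info[queue] = info

    def get_scheduling_info(self, queue):
        info = self._info[queue]
        return "N/A" if info is None else str(info)

    def get_queues(self):
        return list(self._info)

qm = QueueManager(["default", "research"])
assert qm.get_scheduling_info("default") == "N/A"

class CapacityInfo:  # stand-in for a scheduler-private info object
    def __init__(self):
        self.running = 0
    def __str__(self):
        return "running tasks: %d" % self.running

info = CapacityInfo()
qm.set_scheduling_info("research", info)
info.running = 7  # the scheduler mutates its object in place
assert qm.get_scheduling_info("research") == "running tasks: 7"
```

This mirrors the argument made further down the thread for passing an object and stringifying only when the value crosses the wire: the queue manager never needs to be told about updates.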
Some comments about the patch (independent of where the scheduler information is displayed):

- If job-specific information is seen to be not very common across schedulers, we can give a default implementation for getSchedulingInfo(JobInProgress job) returning null in TaskScheduler.
- Every scheduler may not be concerned about how clients of scheduling information order it. Either getQueueSchedulingParameterList() and getJobSchedulingParameterList() can have default implementations that return the keys of the corresponding SchedulingInfo maps, or we can remove these methods altogether and treat scheduler information similar to the way we treat the job list - let the scheduler give out information in the order that it imposes.

After thinking about this for a bit, I think a more natural interface for getSchedulingInfo would be:

class JobInProgress {
  ...
  Object getSchedulerInfo() { ... }
  void setSchedulerInfo(Object info) { ... }
}

The scheduler can then add its own information directly into the JobInProgress. Clearly each scheduler would have its own type for scheduler info. The framework would use the scheduler info's toString() method to generate the string for the user. Thoughts?

The framework would also need to handle null values in the scheduler info. Also note that this will replace the map in all of the schedulers that looks like:

Map<JobInProgress, JobInfo> infos = ...

That's a good idea, and should make the schedulers more efficient as well.

A couple of points if we are moving towards Owen's proposal:

- In Sreekanth's comment above, he's mentioned that the attached patch adds one column for each entry in the Map returned by the get...SchedulingInfo() APIs. The other option, of course, is to display all scheduling info in a single column.
- The advantage of the multi-column approach is purely usability and aesthetics (schedulers which have per-queue scheduling info will show only a name, and the scheduling info as a string, which will look quite odd).
Also, it will make changes to the UI easier, IMHO.

- The advantage of the single-column approach is simplicity for the current implementation.
- I personally prefer multi-column, but am willing to go with consensus.
- If we go with the multi-column approach though, building scheduling information out of a toString API becomes harder.
- If we do go with Owen's approach, I think we might also need:

class TaskScheduler {
  Object getSchedulerInfo(String queueName);
}

to handle cases where the scheduler has per-queue specific info. Please try to vote on your preferred UI approach, if any, so we can move this forward.

I had an offline conversation with Owen and we came to the following proposal:

- While the usability of the UI is enhanced by displaying one column per queue/job scheduling attribute, in the interest of simplicity, we are proposing to display the information as a single string in a single column.
- This information would be available via a toString() API on the SchedulerInfo object proposed by Owen above.
- One of the most important reasons to do it this way is to keep in mind that this information needs to be consumed by the CLI too, which means it should be transferred on the wire. Passing something like a map is going to be tricky for the framework.
- Also, as seen from the discussions above, additional APIs to determine the column order etc. become unnecessary if we assume the scheduler will take care of formatting the string in the scheduler info as it pleases. This makes the API simpler.
- Regarding getting the scheduler info per queue, Owen proposed adding this to the QueueManager class being discussed in HADOOP-3698. Something like:

class QueueManager {
  public void setSchedulingInfo(String queue, Object queueInfo);
  public Object getSchedulingInfo(String queue);
}

One thing we haven't discussed thus far is the changes to the framework to aid the CLI. Showing scheduling information related to a job seems easy.
We can augment JobStatus to contain a String schedulerInfo. For showing the queue-related information, one approach could be as follows:

public class QueueInfo implements Writable {
  String queueName;
  String schedulerInfo;
  ...
}

public interface JobSubmissionProtocol {
  ...
  QueueInfo[] getQueues();
  QueueInfo getQueueInfo(String queue);
  JobStatus[] getJobs(String queue);
}

These APIs are similar to the job-related APIs, like getAllJobs(), getJobStatus(JobID), and getMap/ReduceTaskReports(JobID). Still, I am a little worried about adding these to JobSubmissionProtocol since getQueues() and getQueueInfo() don't per se relate to jobs directly. The alternative though seems to be to define a new protocol that has this info. Open to comments on which is better.

QueueInfo should have private fields and getters. JobSubmissionProtocol is not public and therefore the JobClient needs identical methods.

Attaching a patch with the following changes according to Owen's and Hemanth's comments.

Added the following methods to JobSubmissionProtocol:

- public JobQueueInfo getJobQueueInfo(String queue);
- public JobQueueInfo[] getJobQueueInfos();
- public JobStatus[] getAllJobs(String queue);

Added a new method to TaskScheduler:

- public abstract Collection<JobInProgress> getJobs(String queueName);

Added a new class to encapsulate the scheduling information related to job queues: JobQueueInfo. Added a new JSP page to display queue details and the list of jobs held by the queue, along with the queue scheduling information: jobqueue_details.jsp. Refactored job table generation into a new class in org.apache.hadoop.mapred.JSPUtil. Added new command line options in JobClient.java.

Currently the patch has no test case attached along with it. Will be attaching them soon.
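The QueueInfo proposed above serializes just two strings. Java's DataOutput.writeUTF writes an unsigned 16-bit big-endian length followed by the string bytes (in Java's modified UTF-8, which coincides with plain UTF-8 for the ASCII strings involved here). A small Python sketch of that wire round-trip, with hypothetical helper names:

```python
import struct

def write_utf(buf, s):
    # DataOutput.writeUTF layout: unsigned 16-bit big-endian length, then bytes
    # (modified UTF-8 in Java; identical to UTF-8 for ASCII content).
    data = s.encode("utf-8")
    return buf + struct.pack(">H", len(data)) + data

def read_utf(buf, offset):
    (length,) = struct.unpack_from(">H", buf, offset)
    start = offset + 2
    return buf[start:start + length].decode("utf-8"), start + length

# QueueInfo { queueName, schedulerInfo } round-trip
wire = write_utf(write_utf(b"", "default"), "Guaranteed Capacity: 100%")
name, pos = read_utf(wire, 0)
sched, pos = read_utf(wire, pos)
assert (name, sched) == ("default", "Guaranteed Capacity: 100%")
assert pos == len(wire)
```

The 16-bit length prefix is also why writeUTF caps individual strings at 65535 encoded bytes — one practical argument in the thread's readUTF vs. Text.readString discussion, since Hadoop's Text uses a variable-length size instead.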
Attaching a new patch with the following changes:

- Modified CapacityTaskScheduler to use the new API for the web UI and command line interface.
- Added a test case which makes use of the new methods introduced in the JobSubmissionProtocol.
- Updated the JobQueueInfo class because of a NullPointerException being thrown for the default scheduler, pointed out by Hemanth.
- Fixed a findbugs warning in LimitTasksPerJobTaskScheduler regarding incorrect synchronization.

JobTracker:

- getAllJobs: if the scheduler returns null, it should return an empty JobStatus array.
- There's code being repeated in getAllJobs(), getAllJobs(String queue) and jobsToComplete. I think it should be factored out so changes to one of the methods (e.g. to return a new field) need not be duplicated.

JobQueueInfo:

- schedulingInfo stored here is a stringified version. I think it should be declared a String and get/set should deal with strings. The caller should basically call with actualObject.toString(). This makes it similar to JobStatus.
- In JobStatus, we are using Text.readString whereas in JobQueueInfo, we are using readUTF. I think in similar cases elsewhere we use the UTF versions. Similar comments for the write APIs.

JobSubmissionProtocol:

- Include the HADOOP JIRA number in the comment related to the version field.

JobClient:

- Usage prints: [-queueinfo <job-queue-name> [-showJobs] - this is missing a closing ']'.
- The return code should be set to 0 when the command syntax is found to be correct.
- Since scheduler information is set to empty, it can never be null. I think in any case, it should print something like: Queue Name: default Scheduling Information: N/A
- The line "Job List for the queue ::" needs a newline. Also, I think it can just read "Job list:".

jobqueue_details.jsp:

- Needs a back link to the main jobtracker page.
- Needs a link to the Hadoop web page - like in other pages.

jobtracker.jsp:

- The scheduling info column is not being split into rows.
The HTML code generated does look fine, but still it is not showing up. Can you please check?

CapacityTaskScheduler:

- Does not need supportsPriority as a separate field in the SchedulingInfo class. You can pick it up from one of the QueueSchedulingInfo objects.
- guaranteedCapacity actually must be split between reduce and map slots. Currently, only the value for the maps is being displayed.
- The number of reclaimed resources is an internal variable and does not need to be displayed.
- Rename getQSI to getQueueSchedulingInfo.

TestJobQueueInformation:

- I think you can use JobClient, instead of directly dealing with JobSubmissionProtocol and having to duplicate the methods for createRPCProxy etc.

Made modifications according to the comments. I have also mentioned reasons why some things are left as they were in the previous version of the patch.

JobTracker:

> There's code being repeated in getAllJobs(), getAllJobs(String queue) and jobsToComplete. I think it should be factored out so changes to one of the methods (e.g. to return a new field) need not be duplicated.

Code repetition for converting a collection of JobInProgress to an array of JobStatus has been removed. Modified getAllJobs and getAllJobs(queue). Left jobsToComplete as is.

JobQueueInfo:

> schedulingInfo stored here is a stringified version. I think it should be declared a String and get/set should deal with strings. The caller should basically call with actualObject.toString(). This makes it similar to JobStatus.

The reason why we are using an object and passing only a String over the wire is because we are setting the scheduling information only once. Then the underlying reference of the scheduling information is updated by the respective TaskSchedulers, and we do a toString() while passing it over the wire. This way we can avoid constantly updating the scheduling information in the queue manager. For example, check CapacityTaskScheduler.

It is using JSPHelper from the package to generate the percentage graph.
Maybe that method should be moved into the ServletUtil class in the core util package.

CapacityTaskScheduler:

> Does not need supportsPriority as a separate field in the SchedulingInfo class. You can pick it up from one of the QueueSchedulingInfo objects.

Whether a queue supports priority or not is stored by the JobQueueManager in the capacity scheduler. The queue scheduling information object does not contain whether a particular queue can support priority or not. That is why there is a separate field.

TestJobQueueInformation:

> I think you can use JobClient, instead of directly dealing with JobSubmissionProtocol and having to duplicate the methods for createRPCProxy etc.

The reason why I am not using JobClient directly is that calling it would invoke the display methods, and if we call the display methods then we would have to parse the output of the job client and then test for equality. Moreover, all the newly defined display methods are private. If it is really required I can make them public, then change the test to parse the display string and test equality.

> Left jobsToComplete as is.

I was thinking of something like:

private JobStatus[] getJobStatus(Collection<JobInProgress> jips, boolean onlyRunning) {
  // ..
  if (onlyRunning) {
    // consider only jobs which are running or prep.
  }
}

Would that work?

Regarding tests, taking a cue from APIs like getAllJobs, I think it is OK to provide wrapper APIs around the queue-info-related methods. These could be package private, and the test case can directly access them. So, something like:

JobQueueInfo[] getJobQueueInfos() {
  return jobSubmitClient.getJobQueueInfos();
}

private void displayQueueList() {
  JobQueueInfo[] queues = getJobQueueInfos();
  //
}

Agree with the rest of your explanations. Actually, going over some of the comments above, I see this comment from Owen:

> JobSubmissionProtocol is not public and therefore the JobClient needs identical methods.

So, this agrees with what I've proposed above.
In fact, we should make the APIs public and not package private.

Looked at the other changes. They seem fine to me. Except in jobqueue_details.jsp, there's a line coming at the end as follows: "Hadoop, 2008. \ No newline at end of file"

> we should make the APIs public and not package private.

That's the subject of HADOOP-3822. Let's not introduce new public APIs in this issue.

Refactored getJobsToComplete, getAllJobs and getAllJobs(queue) in JobTracker. Introduced two new package-level methods for getting JobQueueInfo from JobClient and used the same in the test case. Fixed the "No newline at end of file" message.

I think we need to have a separate class for the queue command. Otherwise, users can cross the commands, like:

hadoop queue -kill job_00001

which would be confusing. With separate classes, we can only support the sub-commands that make sense.

Attaching a patch with the following modifications from the previous patch:

- Added a new class called JobQueueClient which implements the command line interface methods for job-queue-related operations.
- Refactored percentageGraph from JspHelper in the namenode class to ServletUtil.
- Removed the dependency on JspHelper conf from the mapred JSP pages.

Incorporating comments from Hemanth:

- Add javadoc for JobQueueClient.
- Refactor common code to a new class rather than using static methods in JobClient.
- Apache license header for JobQueueClient.

In addition to the above changes, the following has also been modified according to Hemanth's comments:

- Modified bin/hadoop usage to display information about the new option (./hadoop queue) introduced.

One nit: previously, the JobTracker was setting the start time in the JobStatus for all the jobs. This is missing from the refactored code, and hence the client is showing the start time as 0. Other than that, it looks good to me.

Fixing the start time issue, which was missed out while refactoring the code.

This is getting close, but I have a few suggestions.
When I asked you to split the queue queries out of JobClient, I didn't think about the API. I think the API is better in JobClient, and JobQueueClient is only about the main method that supports the CLI commands. JobQueueClient shouldn't be a public class, because otherwise it ends up in the public API. So API access is still through JobClient, and JobQueueClient implements Tool, etc.

Let's rename JobSubmissionProtocol.getJobQueueInfos to getQueues(). getJobQueueInfo should be getQueueInfo(queueName). The methods in JobQueueClient should be public, moved to JobClient, and renamed:

- getAllQueueSchedulingInfo -> JobClient.getQueues()
- getAllJobs -> JobClient.getJobsFromQueue(queueName)
- getQueueSchedulingInfo -> JobClient.getQueueInfo(queueName)

mapred.JSPUtil should not be public. Several of the new public API classes and methods are missing javadoc.

JobQueueInfo.schedulerInfo should be a string rather than an object. Since the serialization forces it to be a string, it should just be typed/stored that way. The QueueManager should probably have a map like:

Map<String, Object> schedulerInfo; // map from queue name to scheduler-specific object

and just create the JobQueueInfo when the JobSubmissionProtocol methods are called. The constructor should take the two strings; don't bother with the setSchedulerInfo.

I'm not very happy with ClientUtil. It seems like a weak abstraction. Is it really necessary, especially if you fold it back into JobClient?

Attaching a patch with the following changes:

- Made JobQueueClient default-level access instead of public access.
- Renamed the methods in JobSubmissionProtocol to getQueues(), getQueueInfo(queueName) and getJobsFromQueue(queueName).
- Mirrored these methods in JobClient.
- Made a change in JobQueueClient to use the public methods in JobClient for job queue querying.
- Added javadoc for all public classes and public methods introduced by the patch.
- Removed the map which stored JobQueueInfo in QueueManager.
- Changed the type of SchedulingInfo in JobQueueInfo to String.
- Constructing JobQueueInfo on the fly when requested in QueueManager.

Adding the Apache license to TestJobQueueInformation and JSPUtil according to Hemanth's comment.

I am pasting inline the output of ant test-patch on my local machine. ant test-core and ant test-patch did not generate any build failure on today's trunk on my local machine.

I just committed this. Thanks, Sreekanth.

It seems that there is an NPE in LimitTasksPerJobTaskScheduler. See HADOOP-4213.

I think we need to first decide whether queues are explicit in this API or not. The problem with making queues explicit in the API is that every scheduler will have to support one, or at least a default one. But that's not so bad, IMO.

getSchedulingInfo() should really return key-value pairs for queues, not for jobs. In the HADOOP-3445 scheduler, for example, we need to display scheduling information associated with a queue - its capacity (both 'guaranteed' and 'allocated'), how many unique users have submitted jobs, how many tasks are running, how many are waiting, etc. This information is per queue, and doesn't make sense per job. I'd much rather have getSchedulingInfo() take a queue name as a parameter, if we make queues explicit. In fact, I don't see what kind of scheduling information you'd associate with a job. Matei, do you have examples of what getSchedulingInfo would return for jobs?

Similarly, getJobComparator() makes more sense when applied to a queue. In 3445, jobs are ordered per queue, and there is no global ordering. Furthermore, doesn't it make more sense to get a sorted collection of jobs, per queue, back from the scheduler, rather than a Comparator? Or are you imagining that the UI and CLI maintain a list of jobs all the time and then apply the comparator periodically?
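The getJobs(queueName) contract the thread settles on — the scheduler imposes the order, and callers filter the returned collection — can be sketched in a few lines. This is an illustrative Python toy (the real schedulers are Java, and the sort key below only loosely mirrors the default FIFO scheduler's priority-then-start-time ordering):

```python
class FifoPriorityScheduler:
    """Sketch of getJobs(queue): scheduler-imposed order, caller-side filter."""

    def __init__(self):
        self._queues = {"default": []}

    def add_job(self, queue, job):
        self._queues.setdefault(queue, []).append(job)

    def get_jobs(self, queue):
        # Lower priority number = higher priority; ties broken by start time.
        return sorted(self._queues.get(queue, []),
                      key=lambda j: (j["priority"], j["start"]))

s = FifoPriorityScheduler()
s.add_job("default", {"user": "alice", "priority": 1, "start": 20})
s.add_job("default", {"user": "bob",   "priority": 0, "start": 30})
s.add_job("default", {"user": "alice", "priority": 1, "start": 10})

ordered = s.get_jobs("default")
assert [j["start"] for j in ordered] == [30, 10, 20]

# A caller may filter the returned collection (e.g. by user) without
# disturbing the scheduler-imposed order - no Comparator needed.
alice = [j for j in ordered if j["user"] == "alice"]
assert [j["start"] for j in alice] == [10, 20]
```

The last two lines are the crux of the Comparator discussion above: filtering happens on the caller's side, while ordering stays entirely inside the scheduler.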
https://issues.apache.org/jira/browse/HADOOP-3930?focusedCommentId=12625294&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
I started with the distance estimator for Julia sets for the case of a super-attracting fixed point:

\[ \delta = - \lim_{k \to \infty} \frac{|z_k - z_\infty| \log |z_k - z_\infty|}{|\frac{d}{dz}_k|} \]

This formula is slightly different from the formula on the linked page; I haven't worked out yet exactly why it works and what the significance of the differences is. Anyway, I wanted to apply it to Newton fractals for rational functions. Recall Newton's root finding method for a function \(G(z)\):

\[ z_{k+1} = z_k - \frac{G(z_k)}{\frac{d}{dz}G(z_k)} \]

If there are more than two roots of G, the boundary between regions that converge to different roots is a fractal. It's actually a Julia set for \(F(z)\) where

\[ F(z) = z - \frac{G(z)}{\frac{d}{dz}G(z)} \]

So we need to compute \(F^k(z)\) and \(\frac{d}{dz}F^k(z)\) for the distance estimate. By the chain rule, the derivative of the iterated map is the product of the derivatives at each step. It turns out that the actual calculations are very simple. Here's the derivation:

\[ \begin{aligned} F(z) &= z - \frac{G(z)}{\frac{d}{dz}G(z)} \\ \frac{d}{dz} F(z) &= 1 - \left( \frac{d}{dz} G(z) \frac{1}{\frac{d}{dz} G(z)} + G(z) \frac{d}{dz} \frac{1}{\frac{d}{dz} G(z)} \right) \\ &= 1 - \left( 1 + G(z) \frac{- \frac{d}{dz}\frac{d}{dz} G(z)}{(\frac{d}{dz} G(z))^2} \right) \\ &= \frac{G(z) \frac{d}{dz}\frac{d}{dz} G(z)}{(\frac{d}{dz} G(z))^2} \end{aligned} \]

As \(\frac{d}{dz}F(z)\) has a factor \(G(z)\), and iterations of \(F(z)\) converge to a root \(z_\infty\) where \(G(z_\infty) = 0\), the roots are super-attracting fixed points.
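A quick numerical check of the super-attraction claim: for \(G(z) = z^3 - 1\), a few Newton steps show the error shrinking faster than linearly (roughly squaring once close to the root). A small sketch in Python, purely for illustration (the article's code is C99):

```python
def newton_step(z):
    # F(z) = z - G(z)/G'(z) for G(z) = z**3 - 1
    return z - (z**3 - 1) / (3 * z**2)

z = 1.1 + 0.1j
errors = []
for _ in range(6):
    z = newton_step(z)
    errors.append(abs(z**3 - 1))

assert abs(z - 1) < 1e-12  # converged to the root z = 1
# super-attracting fixed point: convergence is superlinear, not merely linear
assert errors[1] < errors[0] ** 1.5
assert errors[2] < errors[1] ** 1.5
```

A merely attracting fixed point would only give geometric error decay, which is exactly the distinction the distance-estimate formula above relies on.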
Now, \(G(z)\) is a rational function:

\[ G(z) = \frac{P(z)}{Q(z)} = \frac{\prod_{i} (z - p_i)^{P_i}}{\prod_{j} (z - q_j)^{Q_j}} \]

We need to compute \(F\) and \(\frac{d}{dz}F\), and happily this doesn't need the calculation of all of \(G\), \(\frac{d}{dz}G\) and \(\frac{d}{dz}\frac{d}{dz}G\), because lots of terms cancel each other out:

\[ \begin{aligned} \frac{d}{dz} G(z) &= \frac{ Q(z) \frac{d}{dz} P(z) - P(z) \frac{d}{dz} Q(z) }{ (Q(z))^2 } \\ \frac{d}{dz} P(z) &= \sum_I{ P_I (z - p_I)^{P_I - 1} \prod_{i \ne I} (z - p_i)^{P_i} } \\ &= \left( \prod_i{ (z-p_i)^{P_i} } \right) \left( \sum_i{ \frac{P_i}{z - p_i} } \right) \\ &= \left( \sum_i{ \frac{P_i}{z - p_i} } \right) P(z) \\ \frac{d}{dz} Q(z) &= \left( \sum_j{ \frac{Q_j}{z - q_j} } \right) Q(z) \\ \frac{G(z)}{\frac{d}{dz}G(z)} &= \frac{ \frac{P(z)}{Q(z)} }{ \frac{ Q(z) \left( \sum_i{ \frac{P_i}{z - p_i} } \right) P(z) - P(z) \left( \sum_j{ \frac{Q_j}{z - q_j} } \right) Q(z) }{ (Q(z))^2 } } \\ &= \frac{ P / Q }{ \left( P Q \left( \sum\nolimits_P \right) - P Q \left( \sum\nolimits_Q \right) \right) / (Q Q) } \\ &= \frac{1}{ \left( \sum\nolimits_P \right) - \left( \sum\nolimits_Q \right) } \\ F(z) &= z - \frac{1}{ \left( \sum_i{ \frac{P_i}{z - p_i} } \right) - \left( \sum_j{ \frac{Q_j}{z - q_j} } \right) } \\ \frac{d}{dz} F(z) &= 1 + \frac{ \left( \sum_j{ \frac{Q_j}{(z - q_j)^2} } \right) - \left( \sum_i{ \frac{P_i}{(z - p_i)^2} } \right) }{ \left( \left( \sum_i{ \frac{P_i}{z - p_i} } \right) - \left( \sum_j{ \frac{Q_j}{z - q_j} } \right) \right)^2 } \end{aligned} \]

where the derivation of the last line is left as an exercise (in other words, I couldn't be bothered to type up all the pages of equations I scribbled on paper).
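Both simplified formulas can be sanity-checked numerically against a direct evaluation of \(G = P/Q\): compare \(F\) built from the sums against \(z - G/G'\) with \(G'\) from a central finite difference, and compare \(F'\) from the closed form against a finite difference of \(F\) itself. An illustrative Python sketch (zeros, poles and the test point are arbitrary):

```python
# (p_i, P_i) zeros with powers, (q_j, Q_j) poles with powers - arbitrary sample
zeros = [(1 + 0j, 1), (-0.5 + 0.8j, 2)]
poles = [(0.3 - 0.7j, 1)]

def G(z):
    num = 1
    for p, k in zeros:
        num *= (z - p) ** k
    den = 1
    for q, k in poles:
        den *= (z - q) ** k
    return num / den

def F_direct(z, h=1e-6):
    dG = (G(z + h) - G(z - h)) / (2 * h)  # central difference for G'
    return z - G(z) / dG

def F_simplified(z):
    s = sum(k / (z - p) for p, k in zeros) - sum(k / (z - q) for q, k in poles)
    return z - 1 / s

def dF_simplified(z):
    s = sum(k / (z - p) for p, k in zeros) - sum(k / (z - q) for q, k in poles)
    s2 = sum(k / (z - p) ** 2 for p, k in zeros) - sum(k / (z - q) ** 2 for q, k in poles)
    return 1 - s2 / s ** 2  # equivalent to 1 + (sum_Q2 - sum_P2) / (sum_P - sum_Q)^2

z = 2.1 + 1.3j
assert abs(F_direct(z) - F_simplified(z)) < 1e-5
h = 1e-6
dF_fd = (F_simplified(z + h) - F_simplified(z - h)) / (2 * h)
assert abs(dF_fd - dF_simplified(z)) < 1e-5
```

The `1 - s2 / s**2` form is just an algebraic regrouping of the final display equation, with `s` and `s2` the signed sums of first and second powers.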
Putting it into code, here's the algorithm in C99:

#include <complex.h>
#include <math.h>

typedef unsigned int N;
typedef double R;
typedef double complex C;

R // OUTPUT the distance estimate
distance
  ( C z0            // INPUT starting point
  , N nzero         // INPUT number of zeros
  , const C *zero   // INPUT the zeros
  , const C *zerop  // INPUT the power of each zero
  , N npole         // INPUT number of poles
  , const C *pole   // INPUT the poles
  , const C *polep  // INPUT the power of each pole
  , N *which        // OUTPUT the index of the zero converged to
  )
{
  C z = z0;
  C dz = 1.0;
  R eps = 0.1; // root radius, should be as large as possible
  for (N k = 0; k < 1024; ++k) { // fixed iteration limit
    for (N i = 0; i < nzero; ++i) { // check if converged
      R e = cabs(z - zero[i]);
      if (e < eps) {
        *which = i;
        return e * -log(e) / cabs(dz); // compute distance
      }
    }
    C sz = 0.0;
    C sz2 = 0.0;
    for (N i = 0; i < nzero; ++i) {
      C d = z - zero[i];
      sz += zerop[i] / d;
      sz2 += zerop[i] / (d * d);
    }
    C sp = 0.0;
    C sp2 = 0.0;
    for (N j = 0; j < npole; ++j) {
      C d = z - pole[j];
      sp += polep[j] / d;
      sp2 += polep[j] / (d * d);
    }
    C d = sz - sp;
    z -= 1.0 / d;
    dz *= (sp2 - sz2) / (d * d) + 1.0;
  }
  *which = nzero;
  return -1; // didn't converge
}

See the complete C99 source code for distance estimated Newton fractals.
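For quick experimentation without a C toolchain, here is a hypothetical Python port of the same routine (my own transliteration, not from the post; the \(z^3 - 1\) test case and all names are assumptions):

```python
# Hypothetical Python port of the C99 distance() routine above, tried out on
# the Newton fractal for G(z) = z^3 - 1 (three zeros of power 1, no poles).
import cmath
import math

def distance(z0, zeros, zerop, poles, polep, eps=0.1, maxiter=1024):
    z, dz = z0, 1.0
    for _ in range(maxiter):
        for i, r in enumerate(zeros):     # check if converged
            e = abs(z - r)
            if e < eps:
                return i, e * -math.log(e) / abs(dz)  # index, distance estimate
        sz  = sum(p / (z - r)      for r, p in zip(zeros, zerop))
        sz2 = sum(p / (z - r) ** 2 for r, p in zip(zeros, zerop))
        sp  = sum(p / (z - q)      for q, p in zip(poles, polep))
        sp2 = sum(p / (z - q) ** 2 for q, p in zip(poles, polep))
        d = sz - sp
        z -= 1.0 / d                      # Newton step
        dz *= (sp2 - sz2) / (d * d) + 1.0 # running derivative of the iterate
    return len(zeros), -1.0               # didn't converge

w = cmath.exp(2j * cmath.pi / 3)
which, de = distance(0.9 + 0.1j, [1.0, w, w ** 2], [1, 1, 1], [], [])
print(which, de)  # converges to root index 0 with a positive distance estimate
```

Iterating this over a pixel grid and shading by the returned distance estimate gives the boundary-emphasising renderings the post describes.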
https://mathr.co.uk/blog/2013-06-22_distance_estimation_for_newton_fractals.html
Hi Gunnar,

> I'm working on a slice plane module. I've created a Gridded2DSet
> and its points to extract the values from the input field. I got a tip a
> while back that the method evaluate in Function could be used to
> interpolate values. I could create RealTuples for

Unfinished sentence here, so I'm not sure what role your RealTuples play. For creating a slice plane through a 3-D grid, create a Gridded3DSet with manifold dimension = 2. Then pass it to the resample() method of your FlatField, which handles interpolation. The attached Python program provides a nice example.

> The problem with this approach is that it requires massive object
> generation. RealTuple objects are immutable, which means that each new
> lookup would require a new RealTuple for each point.
>
> In addition, I'm guessing that the Data object returned is a new one for
> each method call. Also, casting back and forth costs a bit.

Well, a single call to resample() produces a slice. That slice is a new FlatField, but it's only one per slice. Not so bad.

> This approach is not very preferable. I thought about implementing a
> specialized case for each set type, where I locate the cells the points
> are in and interpolate manually, but it would require a separate
> implementation for each set type, which in my opinion is not good design.

The resample() method uses the Set.valueToInterp() method for interpolation. There are a variety of implementations in the Set class hierarchy, although not a different one for each Set class.

Bottom line: I recommend to use the resample() method for making slices.

Cheers,
Bill

----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W.
Dayton St., Madison, WI 53706
hibbard@xxxxxxxxxxxxxxxxx 608-263-4427 fax: 608-263-6738

from visad.python.JPythonMethods import *
import subs
from visad import *
from visad.util import VisADSlider

# load our neep602_mystery data set, available from
a = load("neep602_mystery")

# get one 3-D grid from the data set, in this case the 8-th
grid = a[7]

# get the type (i.e., schema) information for the 3-D grid
d = domainType(grid)
r = rangeType(grid)

# get the spatial sampling of the 3-D grid
set = getDomain(grid)

# assuming the spatial sampling is rectangular, get its factors
xset = set.getX()
yset = set.getY()
zset = set.getZ()

# get units, coordinate system and errors for the sampling
units = set.getSetUnits()
cs = set.getCoordinateSystem()
errors = set.getSetErrors()

# get the actual x, y and z values of the spatial sampling
xv = xset.getSamples()[0]
yv = yset.getSamples()[0]
zv = zset.getSamples()[0]

# create a display with mappings for our grid
maps = subs.makeMaps(d[0], "y", d[1], "x", d[2], "z", r, "rgb")
maps[2].setRange(zv[0], zv[-1])
display = subs.makeDisplay(maps)

# create an interactive slider for choosing a grid level
level = DataReferenceImpl("height")
slider = VisADSlider("level", int(1000.0 * zv[0]), int(1000.0 * zv[-1]),
                     int(1000.0 * zv[0]), 0.001, level, d[2])

# define a function for extracting a grid level at height 'z'
def makeSlice(z):
  # initialize arrays for a 2-D grid at height 'z'
  xs = []
  ys = []
  zs = []
  # loops for the x and y values from the original 3-D grid
  for y in yv:
    for x in xv:
      # set x and y locations for the 2-D grid
      xs.append(x)
      ys.append(y)
      # set constant z heights in the 2-D grid
      zs.append(z)
      # or, for a curved slice, use this instead:
      # zs.append(z + 0.04 * ((x-xv[9])*(x-xv[9])+(y-yv[9])*(y-yv[9])))
  # create a 2-D grid embedded at height 'z' in 3-D space
  slice_set = Gridded3DSet(d, [xs, ys, zs], len(xv), len(yv), cs, units, errors)
  # resample our original 3-D grid to the 2-D grid
  return grid.resample(slice_set)

# add an initial slice to the display
slice = subs.addData("slice", makeSlice(zv[0]), display)

# a little program to run whenever the user moves the slider
# it displays a 2-D grid at the height defined by the slider
class MyCell(CellImpl):
  def doAction(this):
    z = level.getData().getValue()
    slice.setData(makeSlice(z))

# connect the slider to the little program
cell = MyCell()
cell.addReference(level)

# turn on axis scales in the display
showAxesScales(display, 1)

# show the display on the screen, along with the slider
subs.showDisplay(display, top=slider)

# ordinary plot of the 3-D grid for comparison
plot(grid)
http://www.unidata.ucar.edu/mailing_lists/archives/visad/2002/msg00231.html
The C library function double atan2(double y, double x) returns the arc tangent in radians of y/x, using the signs of both values to determine the correct quadrant.

Following is the declaration for the atan2() function.

double atan2(double y, double x)

x − This is the floating point value representing an x-coordinate.
y − This is the floating point value representing a y-coordinate.

This function returns the principal arc tangent of y/x, in the interval [-pi,+pi] radians.

The following example shows the usage of the atan2() function.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265

int main () {
   double x, y, ret, val;

   x = -7.0;
   y = 7.0;
   val = 180.0 / PI;

   ret = atan2 (y,x) * val;
   printf("The arc tangent of x = %lf, y = %lf ", x, y);
   printf("is %lf degrees\n", ret);

   return(0);
}

Let us compile and run the above program that will produce the following result −

The arc tangent of x = -7.000000, y = 7.000000 is 135.000000 degrees
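The quadrant behaviour can also be cross-checked from Python, whose math.atan2 takes the same (y, x) argument order and returns values in the same [-pi, +pi] range as the C function:

```python
# Cross-check of atan2 quadrant handling using Python's math.atan2,
# which mirrors the C function's (y, x) argument order and [-pi, +pi] range.
import math

deg  = math.degrees(math.atan2(7.0, -7.0))   # y > 0, x < 0: second quadrant
deg2 = math.degrees(math.atan2(-7.0, 7.0))   # y < 0, x > 0: fourth quadrant
print(deg, deg2)  # approximately 135.0 and -45.0
```

A plain atan(y/x) would return -45 degrees in both cases; it is the sign information of the two separate arguments that lets atan2 pick the correct quadrant.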
https://www.tutorialspoint.com/c_standard_library/c_function_atan2
import "github.com/advancedlogic/kite" Package kite is a library for creating micro-services. Two main types implemented by this package are Kite for creating a micro-service server called "Kite" and Client for communicating with other kites. kontrolclient implements a kite.Client for interacting with the Kontrol kite. Package server implements an HTTP(S) server for kites. client.go errors.go handlers.go heartbeat.go kite.go kontrolclient.go logger.go method.go registerurl.go request.go server.go tokenrenewer.go Returned from GetKites when the query matches no kites. type Auth struct { // Type can be "kiteKey", "token" or "sessionID" for now. Type string `json:"type"` Key string `json:"key"` } Authentication is used when connecting a Client. type Client struct { // The information about the kite that we are connecting to. protocol.Kite // A reference to the current Kite running. LocalKite *Kite // Credentials that we send in each request. Auth *Auth // Should we reconnect if disconnected? Reconnect bool // SockJS base URL URL string // Should we process incoming messages concurrently or not? Default: true Concurrent bool // ClientFunc is called each time a new sockjs.Session is established. // The session will use the returned *http.Client for HTTP round trips // for the XHR transport. // // If ClientFunc is nil, sockjs.Session will use a default, internal // *http.Client value. ClientFunc func(*sockjsclient.DialOptions) *http.Client // ReadBufferSize is the input buffer size. By default it's 4096. ReadBufferSize int // WriteBufferSize is the output buffer size. By default it's 4096. WriteBufferSize int // contains filtered or unexported fields } Client is the client for communicating with another Kite. It has Tell() and Go() methods for calling methods in a sync/async way. Dial connects to the remote Kite. Returns an error if it can't. Dial connects to the remote Kite. If it can't connect, it retries indefinitely. It returns a channel to check if it's connected or not.
DialTimeout acts like Dial but takes a timeout. Go makes an unblocking method call to the server. It returns a channel that the caller can wait on to get the response. func (c *Client) GoWithTimeout(method string, timeout time.Duration, args ...interface{}) chan *response GoWithTimeout does the same thing as the Go() method except it takes an extra argument that is the timeout for waiting for a reply from the remote Kite. If timeout is given 0, the behavior is the same as Go(). OnConnect registers a function to run on connect. OnDisconnect registers a function to run on disconnect. Tell makes a blocking method call to the server. It waits until the callback function is called by the other side and returns the result and the error. func (c *Client) TellWithTimeout(method string, timeout time.Duration, args ...interface{}) (result *dnode.Partial, err error) TellWithTimeout does the same thing as the Tell() method except it takes an extra argument that is the timeout for waiting for a reply from the remote Kite. If timeout is given 0, the behavior is the same as Tell(). type Error struct { Type string `json:"type"` Message string `json:"message"` CodeVal string `json:"code"` } Error is the type of the kite-related errors returned from the kite package. Objects implementing the Handler interface can be registered to a method. The returned result must be marshalable with the json package. HandlerFunc is a type adapter to allow the use of ordinary functions as Kite handlers. If h is a function with the appropriate signature, HandlerFunc(h) is a Handler object that calls h. func (h HandlerFunc) ServeKite(r *Request) (interface{}, error) ServeKite calls h(r). type Kite struct { Config *config.Config // Log logs with the given Logger interface Log Logger // SetLogLevel changes the level of the logger. Default is INFO. SetLogLevel func(Level) // Contains different functions for authenticating user from request. // Keys are the authentication types (options.auth.type).
Authenticators map[string]func(*Request) error // ClientFunc is used as the default value for kite.Client.ClientFunc. // If nil, a default ClientFunc will be used. // // See also: kite.Client.ClientFunc docstring. ClientFunc func(*sockjsclient.DialOptions) *http.Client // MethodHandling defines how the kite returns the response for // multiple handlers MethodHandling MethodHandling TLSConfig *tls.Config Id string // Unique kite instance id // contains filtered or unexported fields } Kite defines a single process that enables distributed service messaging amongst the peers it is connected to. A Kite process acts as a Client and as a Server. That means it can receive requests and process them, but it can also make requests to other kites. Do not use this struct directly. Use the kite.New function, add your handlers with the HandleFunc method, then call the Run method to start the inbuilt server (or pass it to any http.Handler compatible server). New creates, initializes and then returns a new Kite instance. Version must be in 3-digit semantic form. Name is important in that it's also used to be searched by others. AuthenticateFromKiteKey authenticates the user from the kite key. AuthenticateFromToken is the default Authenticator for Kite. AuthenticateSimpleKiteKey authenticates the user from the given kite key and returns the authenticated username. It's the same as AuthenticateFromKiteKey but can be used without the need for a *kite.Request. Close stops the server and the kontrol client instance. GetKites returns the list of Kites matching the query. The returned list contains ready-to-connect Client instances. The caller must connect with Client.Dial() before using each Kite. An error is returned when no kites are available. GetToken is used to get a new token for a single Kite. Handle registers the handler for the given method. The handler is called when a method call is received from a Kite.
func (k *Kite) HandleFunc(method string, handler HandlerFunc) *Method HandleFunc registers a handler to run when a method call is received from a Kite. It returns a *Method option to further modify certain options on a method call. HandleHTTP registers the HTTP handler for the given pattern into the underlying HTTP muxer. HandleHTTPFunc registers the HTTP handler for the given pattern into the underlying HTTP muxer. Kite returns the definition of the kite. KontrolReadyNotify returns a channel that is closed when a successful registration to kontrol is done. NewClient returns a pointer to a new Client. The returned instance is not connected. You have to call Dial() or DialForever() before calling the Tell() and Go() methods. NewKeyRenewer renews the internal key every given interval. OnDisconnect registers a function to run when a connected Kite is disconnected. OnFirstRequest registers a function to run when a Kite connects to this Kite. Port returns the TCP port number that the kite listens on. Port must be called after the listener is initialized. You can use the ServerReadyNotify function to get notified when the listener is ready. The kite starts listening on the port when Run() is called. Since Run() is blocking you need to run it as a goroutine, then call this function when the listener is ready. Example: k := kite.New("x", "1.0.0") go k.Run() <-k.ServerReadyNotify() port := k.Port() PostHandle registers a handler which is executed after a kite.Handler method is executed. Calling PostHandle multiple times registers multiple handlers. A non-error return triggers the execution of the next handler. The execution order is FIFO. func (k *Kite) PostHandleFunc(handler HandlerFunc) PostHandleFunc is the same as PostHandle. It accepts a HandlerFunc. PreHandle registers a handler which is executed before a kite.Handler method is executed. Calling PreHandle multiple times registers multiple handlers. A non-error return triggers the execution of the next handler. The execution order is FIFO.
func (k *Kite) PreHandleFunc(handler HandlerFunc) PreHandleFunc is the same as PreHandle. It accepts a HandlerFunc. RSAKey returns the corresponding public key for the issuer of the token. It is called by the jwt-go package when validating the signature in the token. Register registers the current Kite to Kontrol. After registration other Kites can find it via the GetKites() or WatchKites() method. This method does not handle the reconnection case. If you want to stay registered to kontrol, use RegisterForever(). RegisterForever is equivalent to Register(), but it tries to re-register if there is a disconnection. The returned error is for the first register attempt. It returns nil if ReadNotify() is ready and it's registered successfully. RegisterHTTP registers the current Kite to Kontrol. After registration other Kites can find it via the GetKites() or WatchKites() method. It registers again if the connection to kontrol is lost. RegisterHTTPForever is just like RegisterHTTP, however it first tries to register forever until a response from kontrol is received. It's useful to use it during app initialization. After the registration a reconnect is automatically handled inside the RegisterHTTP method. RegisterToProxy is just like RegisterForever but registers the given URL to kontrol over a kite-proxy. A kiteproxy is a reverse proxy that can be used for SSL termination or handling hundreds of kites behind a single. This is a blocking function. RegisterURL returns a URL that is either local or public. It's a helper method to get a registration URL that can be passed to Kontrol (via the methods Register(), RegisterToProxy(), etc.). It needs to be called after all configurations are done (like TLS, Port, etc.). If local is true a local IP is used, otherwise a public IP is used. Run is a blocking method. It runs the kite server and then accepts requests asynchronously. It supports graceful restart via SIGUSR2. ServeHTTP helps Kite to satisfy the http.Handler interface.
So kite can be used as a standard http server. SetupKontrolClient sets up and prepares the kontrol instance. It connects to kontrol and reconnects again if there are any disconnections. This method is called internally whenever a kontrol client specific action is taking place. However if you wish to connect earlier you may call this method. SetupSignalHandler listens to signals and toggles the log level to DEBUG mode when it receives a SIGUSR2 signal. Another SIGUSR2 toggles the log level back to the old level. func (k *Kite) TellKontrolWithTimeout(method string, timeout time.Duration, args ...interface{}) (result *dnode.Partial, err error) TellKontrolWithTimeout is a lower level function for communicating directly with kontrol. Like GetKites and GetToken, this automatically sets up and connects to kontrol as needed. Trust a Kontrol key for validating tokens. Logging levels. type Logger interface { // Fatal logs to the FATAL, ERROR, WARNING, INFO and DEBUG levels, // including a stack trace of all running goroutines, then calls // os.Exit(1). Fatal(format string, args ...interface{}) // Error logs to the ERROR, WARNING, INFO and DEBUG level. Error(format string, args ...interface{}) // Warning logs to the WARNING, INFO and DEBUG level. Warning(format string, args ...interface{}) // Info logs to the INFO and DEBUG level. Info(format string, args ...interface{}) // Debug logs to the DEBUG level. Debug(format string, args ...interface{}) } Logger is the interface used to log messages in different levels. Method defines a method and the Handler it is bound to. By default "ReturnMethod" handling is used. DisableAuthentication disables the authentication check for this method. PostHandle adds a new kite handler which is executed after the method. func (m *Method) PostHandleFunc(handler HandlerFunc) *Method PostHandleFunc adds a new kite handlerfunc which is executed after the method. PreHandle adds a new kite handler which is executed before the method.
func (m *Method) PreHandleFunc(handler HandlerFunc) *Method PreHandleFunc adds a new kite handlerfunc which is executed before the method. Throttle throttles the method for each incoming request. The throttle algorithm is based on a token bucket implementation. Rate determines the number of requests which are allowed per frequency. Example: A capacity of 50 and a fillInterval of two seconds means that initially it can handle 50 requests and every two seconds the bucket will be filled with one token until it hits the capacity. If there is a burst of API calls, all tokens will be exhausted and clients need to wait until the bucket is refilled over time. For example to have a throttle of 30 req/second, you need to have a fillInterval of 33.33 milliseconds. MethodHandling defines how to handle chaining of kite.Handler middlewares. An error breaks the chain regardless of what handling is used. Note that all Pre and Post handlers are executed regardless of the handling logic; only the return parameter is defined by the handling mode. const ( // ReturnMethod returns the main method's response. This is the standard default. ReturnMethod MethodHandling = iota // ReturnFirst returns the first non-nil response. ReturnFirst // ReturnLatest returns the latest response (waterfall behaviour) ReturnLatest ) type Request struct { // Method defines the method name which is invoked by the incoming request Method string // Args defines the incoming arguments for the given method Args *dnode.Partial // LocalKite defines a context for the local kite LocalKite *Kite // Client defines a context for the remote kite Client *Client // Username defines the username which the incoming request is bound to. // This is authenticated and validated if authentication is enabled. Username string // Auth stores the authentication information for the incoming request and // the type of authentication.
This is not used when authentication is disabled Auth *Auth // Context holds a context that is used by the current ServeKite handler. Any // items added to the Context can be fetched from other handlers in the // chain. This is useful with PreHandle and PostHandle handlers to pass // data between handlers. Context cache.Cache } Request contains information about the incoming request. Response is the type of the object that is returned from request handlers and the type of the only argument that is passed to callback functions. TokenRenewer renews the token of a Client just before it expires. func NewTokenRenewer(r *Client, k *Kite) (*TokenRenewer, error) func (t *TokenRenewer) RenewWhenExpires() RenewWhenExpires renews the token before it expires. Package kite imports 40 packages (graph). Updated 2017-05-18. This is a dead-end fork (no commits since the fork).
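The token-bucket scheme described for Method.Throttle can be sketched generically. This Python sketch is illustrative only (not the kite implementation); the class and parameter names are my own:

```python
# Generic token-bucket sketch of the throttling scheme described for
# Method.Throttle (illustrative only; not the kite implementation).
class TokenBucket:
    def __init__(self, capacity, fill_interval):
        self.capacity = capacity            # burst size
        self.fill_interval = fill_interval  # seconds per new token
        self.tokens = float(capacity)       # bucket starts full
        self.last = 0.0                     # time of last refill

    def allow(self, now):
        # refill one token per fill_interval elapsed, capped at capacity
        refill = (now - self.last) / self.fill_interval
        self.tokens = min(self.capacity, self.tokens + refill)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True                     # request admitted
        return False                        # request throttled

# 30 requests/second -> one new token every 1/30 s (~33.33 ms)
bucket = TokenBucket(capacity=30, fill_interval=1.0 / 30.0)
burst = [bucket.allow(0.0) for _ in range(40)]  # 40 simultaneous requests
later = bucket.allow(0.05)                      # 50 ms later, one token back
print(burst.count(True), later)
```

The burst admits exactly `capacity` requests before the bucket empties; afterwards requests are admitted at the steady refill rate, which matches the 30 req/second and 33.33 ms example in the text.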
https://godoc.org/github.com/advancedlogic/kite
AccessGranted is a multi-role and whitelist based authorization gem for Rails. And it's lightweight (~300 lines of code)!

Add the gem to your Gemfile:

gem 'access-granted', '~> 1.1.0'

Run the bundle command to install it. Then run the generator:

rails generate access_granted:policy

Add the policies (and roles, if you're splitting your roles into separate files) directories to your autoload paths in application.rb:

config.autoload_paths += %W(#{config.root}/app/policies #{config.root}/app/roles)

Because it has zero runtime dependencies it is guaranteed to work on all major Ruby versions: MRI 2.0 - 2.5, Rubinius >= 2.X and JRuby >= 1.7.

AccessGranted is meant as a replacement for CanCan to solve major problems:

Performance: On average AccessGranted is 20 times faster in resolving identical permissions and takes less memory. See benchmarks.

Roles: Adds support for roles, so no more ifs and elses in your Policy file. This makes it extremely easy to maintain and read the code.

Whitelists: This means that you define what the user can do, which results in clean, readable policies regardless of application complexity. You don't have to worry about juggling cans and cannots in a very convoluted way! Note: cannot is still available, but has a very specific use. See Usage below.

Framework agnostic: Permissions can work on basically any object and AccessGranted is framework-agnostic, but it has Rails support out of the box. :) It does not depend on any libraries, pure and clean Ruby code. Guaranteed to always work, even when software around it changes.

Roles are defined using blocks (or by passing custom classes to keep things tidy). Order of the roles is VERY important, because they are traversed in top-to-bottom order. At the top you must have an admin or some other important role giving the user top permissions, and as you go down you define less-privileged roles.
I recommend starting your adventure by reading my blog post about AccessGranted, where I demonstrate its abilities on a real life example. Let's start with a complete example of what can be achieved:

# app/policies/access_policy.rb
class AccessPolicy
  include AccessGranted::Policy

  def configure
    # The most important admin role, gets checked first
    role :admin, { is_admin: true } do
      can :manage, Post
      can :manage, Comment
    end

    # Less privileged moderator role
    role :moderator, proc { |u| u.moderator? } do
      can [:update, :destroy], Post
      can :update, User
    end

    # The basic role. Applies to every user.
    role :member do
      can :create, Post
      can [:update, :destroy], Post do |post, user|
        post.author == user && post.comments.empty?
      end
    end
  end
end

Each role method accepts the name of the role you're creating and an optional matcher. Matchers are used to check if the user belongs to that role and if the permissions inside should be executed against it. The simplest role can be defined as follows:

role :member do
  can :read, Post
  can :create, Post
end

This role will allow everyone (since we didn't supply a matcher) to read and create posts. But now we want to let admins delete those posts (for example spam posts). In this case we create a new role above the :member to add more permissions for the admin:

role :admin, { is_admin: true } do
  can :destroy, Post
end

role :member do
  can :read, Post
  can :create, Post
end

The { is_admin: true } hash is compared with the user's attributes to see if the role should be applied to it. So, if the user has an attribute is_admin set to true, then the role will be applied to it. Note: you can use more keys in the hash to check many attributes at once. Hashes can be used as matchers to check if an action is permitted.
For example, we may allow users to only see published posts, like this:

role :member do
  can :read, Post, { published: true }
end

Sometimes you may need to dynamically check for ownership or other conditions; this can be done using a block condition in the can method, like so:

role :member do
  can :update, Post do |post, user|
    post.author_id == user.id
  end
end

When the given block evaluates to true, then the user is given the permission to update the post. Additionally, we can allow admins to update all posts despite them not being authors like so:

role :admin, { is_admin: true } do
  can :update, Post
end

role :member do
  can :update, Post do |post, user|
    post.author_id == user.id
  end
end

As stated before: the :admin role takes precedence over the :member role, so when AccessGranted sees that an admin can update all posts, it stops looking at the less important roles. That way you can keep a tidy and readable policy file which is basically human readable.

AccessGranted comes with a set of helpers available in Ruby on Rails apps:

class PostsController
  def show
    @post = Post.find(params[:id])
    authorize! :read, @post
  end

  def create
    authorize! :create, Post
    # (...)
  end
end

authorize! throws an exception when current_user doesn't have a given permission. You can rescue from it using rescue_from:

class ApplicationController < ActionController::Base
  rescue_from "AccessGranted::AccessDenied" do |exception|
    redirect_to root_path, alert: "You don't have permission to access this page."
  end
end

You can also extract the action and subject which raised the error, if you want to handle authorization errors differently for some cases:

rescue_from "AccessGranted::AccessDenied" do |exception|
  status = case exception.action
           when :read # invocation like `authorize! :read, @something`
             403
           else
             404
           end

  body = case exception.subject
         when Post # invocation like `authorize! @some_action, Post`
           "failed to access a post"
         else
           "failed to access something else"
         end
end

You can also have a custom exception message while authorizing a request. This message will be associated with the exception object thrown.

class PostsController
  def show
    @post = Post.find(params[:id])
    authorize! :read, @post, 'You do not have access to this post'
    render json: { post: @post }
  rescue AccessGranted::AccessDenied => e
    render json: { error: e.message }, status: :forbidden
  end
end

To check if the user has a permission to perform an action, use the can? and cannot? methods. Example:

class UsersController
  def update
    # (...)
    # only admins can elevate users to moderator status
    if can? :make_moderator, @user
      @user.moderator = params[:user][:moderator]
    end
    # (...)
  end
end

Usually you don't want to show "Create" buttons for people who can't create something. You can hide any part of the page from users without permissions like this:

# app/views/categories/index.html.erb
<% if can? :create, Category %>
  <%= link_to "Create new category", new_category_path %>
<% end %>

By default, AccessGranted adds this method to your controllers:

def current_policy
  @current_policy ||= ::AccessPolicy.new(current_user)
end

If you have a different policy class or if your user is not stored in the current_user variable, then you can override it in any controller and modify the logic as you please. You can even have different policies for different controllers!

Initialize the Policy class:

policy = AccessPolicy.new(current_user)

Check the ability to do something, with can?:

policy.can?(:create, Post)  #=> true
policy.can?(:update, @post) #=> false

or with cannot?:

policy.cannot?(:create, Post)  #=> false
policy.cannot?(:update, @post) #=> true

Let's say your app is getting bigger and more complex. This means your policy file is also getting longer.
Below you can see an extracted :member role:

class AccessPolicy
  include AccessGranted::Policy

  def configure
    role :administrator, is_admin: true do
      can :manage, User
    end

    role :member, MemberRole, lambda { |u| !u.guest? }
  end
end

And roles should look like this:

# app/roles/member_role.rb
class MemberRole < AccessGranted::Role
  def configure
    can :create, Post
    can :destroy, Post do |post, user|
      post.author == user
    end
  end
end

This gem has been created as a replacement for CanCan and therefore it requires minimum work to switch. AccessGranted does not extend ActiveRecord in any way, so it does not have the accessible_by? method, which could be used for querying objects available to the current user. This was very complex and only worked with permissions defined using hash conditions, so I decided not to implement this functionality, as it was mostly ignored by CanCan users.

Both can?/cannot? and authorize! methods work in Rails controllers and views, just like in CanCan. The only change you have to make is to replace all can? :manage, Class with the exact action to check against. can :manage is still available for defining methods and serves as a shortcut for defining :create, :read, :update, :destroy all in one line.

Syntax for defining permissions in the AccessPolicy file (Ability in CanCan) is exactly the same, with roles added on top. See Usage above.

Create your feature branch (git checkout -b my-new-feature), commit your changes (git commit -am 'Add some feature'), then push to the branch (git push origin my-new-feature).
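The top-to-bottom role traversal and whitelist default described above can be sketched language-agnostically. This Python sketch is my own illustration, not the gem's code; all names are assumptions:

```python
# Language-agnostic sketch (not the gem's code) of whitelist-based role
# resolution: roles are checked top to bottom, the first applicable role
# that defines the permission wins, and anything undefined is denied.
class Role:
    def __init__(self, name, matcher, permissions):
        self.name = name
        self.matcher = matcher          # user -> bool: does this role apply?
        self.permissions = permissions  # (action, subject) -> bool or callable

def can(roles, user, action, subject):
    for role in roles:                  # order matters: admin comes first
        if not role.matcher(user):
            continue                    # role doesn't apply to this user
        allowed = role.permissions.get((action, subject))
        if allowed is not None:
            # block conditions map to callables, plain grants to booleans
            return allowed(user) if callable(allowed) else allowed
    return False                        # whitelist: default deny

roles = [
    Role("admin", lambda u: u.get("is_admin", False),
         {("destroy", "Post"): True}),
    Role("member", lambda u: True,
         {("read", "Post"): True, ("create", "Post"): True}),
]

print(can(roles, {"is_admin": True}, "destroy", "Post"))   # True
print(can(roles, {"is_admin": False}, "destroy", "Post"))  # False
print(can(roles, {"is_admin": False}, "read", "Post"))     # True
```

The default-deny return at the end is what makes this a whitelist: there is no need for explicit cannot rules in the common case.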
https://awesomeopensource.com/project/chaps-io/access-granted
ReactJS simple example

The best way to learn a new Javascript library is to write a simple bare-bones example. In this post I have written a simple ReactJS example that displays the classic 'Hello World' message.

Firstly, we have our basic HTML5 structure including a div tag with the id 'container'. The external Javascript files included are: the React library itself, and the React-dom library which provides the DOM specific methods. The empty script tag is where we will add the ReactJS code.

<!doctype html>
<html>
<head>
<meta charset="UTF-8">
<title>React Example</title>
</head>
<body>
<div id="container"></div>
<script src=""></script>
<script src=""></script>
<script>
//Code goes here
</script>
</body>
</html>

Inside of the empty script tag add the following code below. React is built on the idea of components. Essentially everything in a React app is part of a component. Our main component in the example is the App component. A component is created using the createClass method. Every React component must have a render() method which states the HTML to be added to the page. The ReactDOM render() method adds the content from the App component to the div with the id 'container'. The first argument is the App component and the second argument is a reference to the div.

var App = React.createClass({
  render: function() {
    return (
      <div>
        <p>Hello World</p>
      </div>
    );
  }
});

ReactDOM.render(<App />, document.getElementById('container'));

You may have noticed the HTML content is inside the Javascript code. This is not official HTML, but a new React concept called JSX, which is basically a Javascript extension that allows you to write XML style tags. At runtime the HTML and Javascript get added to the DOM together. We can also nest multiple components inside of the App component. In the following example I have created a 'Hello World' component and added two instances inside of the App component.
var HelloWorld = React.createClass({
  render: function(){
    return (
      <p>Hello World</p>
    );
  }
});

var App = React.createClass({
  render: function(){
    return (
      <div>
        <HelloWorld />
        <HelloWorld />
      </div>
    );
  }
});

ReactDOM.render(<App />, document.getElementById('container'));
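Since JSX can look magical at first, it may help to see roughly what it compiles down to: each tag becomes a function call that returns a plain element object. The createElement below is a simplified stand-in of my own for illustration, not React's real implementation:

```javascript
// Simplified stand-in for React.createElement, to show the shape of
// what JSX compiles to -- NOT the real React implementation.
function createElement(type, props, ...children) {
  return { type: type, props: props || {}, children: children };
}

// <div> with two <HelloWorld /> style children becomes nested calls:
var tree = createElement('div', null,
  createElement('p', null, 'Hello World'),
  createElement('p', null, 'Hello World'));

console.log(tree.type);            // 'div'
console.log(tree.children.length); // 2
```

The real compiler does the same rewriting at build or load time, which is why the JSX "HTML" never actually reaches the browser as markup.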
Fireplaces ~1400 answered questions Parent Category: Heating Ventilating and Air Conditioning A place in a home connected to a chimney where logs of wood are burnt in order to heat a room. How do you protect kids from a wood stove that will not be in use without spending a fortune? There isn't a big danger if it isn't in use. The only danger would be if they fell and hit themselves on it. You'll just have to watch them carefully around the wood stove and teach them not to run around it and to be careful. Kids can learn at a really young age the proper behavior around different… Popularity: 137 How do you fix a gas leak? First off, where is it? If it is in line then you can cut that section off and replace it with rubber fuel line of equal size. Popularity: 108 How do you hook up a natural gas appliance? It is easiest if the gas line and the electrical outlet are already set up in your home. At that point, it is easy to set up the appliance. When hooking up a new or used gas appliance it is required that you use a new gas line. I always try to use a flexible line and make sure it is extra long so … Popularity: 153 How do you repair a heat duct? Duct tape. Go to a Home Depot or Lowes and buy aluminum peel-and-stick tape for a permanent repair. Popularity: 42 How do you get rid of a bee hive in the chimney of a fireplace? First of all DO NOT start a fire in the fireplace to drive the bees out. The bees could have damaged the lining of the fireplace, creating a home fire hazard, or created a nest that is blocking the flue, and this will back smoke up into the house. Step 1. Install a chimney balloon or inflatable damper… Popularity: 38 How do you clean melted fabric off the bottom of your iron? We used to keep a block of paraffin wax wrapped in cheesecloth. Once in a while, we would run the hot iron over the wax and wipe off the residue. This resulted in a slick, clean iron.
To remove melted plastic and fabrics, heat at low setting just until material softens, and scrape off with a smooth … Popularity: 209 How do you clean brick on a fireplace surround? First, brush off the loose ashes. Spray the brick with water. Then, mix a mild cleaner with water and use that to clean off the soot. Popularity: 228 What could cause a constant draft coming in your fireplace and how can you stop it? Answer Light a fire. Check and see if an animal or bird has built a nest which is keeping your chimney's flue open. Answer You need to close the damper. Look inside the flue for the handle, and push or pull. Answer It could either be your flue is stuck open slightly or you have a crack in the chimney...eithe… Popularity: 89 effective heat producer for your home. Here are a couple of items that may help, but keep in mind fireplaces by design tend to leak interior air when they are dormant and a… Popularity: 37 How do you clean a cast iron fireplace? Cast iron is a very hard material, and the best way to clean it is with a stiff wire brush. These can be purchased for around £1 or £2 ($2.00 or $3.00). If the fireplace is "in situ" you should be able to clean most of the visible surfaces easily. If the fireplace is "out" and you can … Popularity: 35 Can you build a deck over a septic field? Building over a Septic Drainfield or Leachfield Septic drainfields and an equal sized replacement area should ideally be located in an area that receives little or no surface disturbance. For example they can't be under driveways, or in a pasture where the ground might be compacted by livestock o… Popularity: 71 What would cause a chimney cap to clog? Bird nests, straw, grass, etc., because if you don't have the screen on the cap the birds will make a nest in there, because it is warm. You can buy the screen or cover at any hardware store or building centre. Popularity: 20 How do you clean a burned iron?
Magic eraser Popularity: 2 How much is a fair price to build a 16 foot by 16 foot deck with pressure treated wood? Answer $400 in parts and $200-$300 for labor. I live in the southeastern U.S. and you won't get a deck built here for that price. You need to consider the height from the ground, do you want hand rails around it? In some counties and cities you have to put hand rails on it if it's so many inches off … Popularity: 17 How do you open an indoor fireplace flue damper? Answer There are many kinds of dampers, depending upon how old the house is and the choice by the masons. Many dampers have a hinge on the rear and a handle that can be pushed or lifted to lift the front and latch it open. Others have a lever designed to move with a knob or chain connected to th… Popularity: 57 Does a fireplace need to be in working order at time of home sale in Fairfax County Virginia if the fireplace was not listed as-is and was represented as a working fireplace? Answer Virginia is a Caveat Emptor state (Buyer Beware). Basically, the obligations of a seller are established by the contract for sale, not by legislation. The standard Northern Virginia contract does require all plumbing and all appliances to be functioning (paragraph 7). Also, there is a para… Popularity: 13 What is a damper? A damper is an Australian bread. That is a type of food from Australia. Popularity: 24 Can a fireplace be placed inside a gazebo? A fireplace can be placed inside of a gazebo. There are several plans available. The trick is finding the right fireplace design to fit your gazebo and building it properly so as not to present a hazard. Popularity: 1 How do you disassemble a Winchester Ranger 120 1 for cleaning? Hi, I have owned and operated a 120 for about 22 years and they are awesome and very dependable shotguns. I purchased mine from Smoke and Gun in Waukegan… Popularity: 6 Should you close the flue to the fireplace to keep the humidity out of the house?
The purpose of the fireplace damper is to keep the outside elements, outside. The damper is opened only when there is a fire in the fireplace. Popularity: 1 How much does it cost to add freon? Answer I just had the A/C guy do my old residential a/c system and he charged me $125 and added 4 pounds of Freon 22. That included the service call. It took him about 10 -15 minutes. But like they say, it ain't what you do it's what you know. This was 6/25/2007 in Boise Idaho. He said he was ex… Popularity: 6 Is willow a good wood to burn in a fireplace? Though willow is considered a hardwood, it doesn't burn as well as some other hardwoods that burn well in fireplaces, such as apple, beech, eucalyptus, hickory, maple, oak, and more. Additional response - It depends upon what you mean by good. Seasoned (dried) Willow will burn just fine in your… Popularity: 34 What happens when charcoal is sprinkled in flame? Sprinkling charcoal in a flame will cause the flame to change colors. This is apparent in fireworks displays, which routinely use charcoal. Popularity: 1 Should you install a flat screen tv over the fireplace? no because the heat will affect the electronics in the tv Popularity: 13 Remove paint from cast iron fireplace? Well I had the same problem before. A little bleach fixed the problem --------- It depends on the kind of paint that's on the metal. Latex paint will come off a number of ways: 1. Warm water and dish soap with a VERY soft scrubbing pad often works. 2. Rubbing alcohol or ammonia will also work a… Popularity: 4 Should you close ac vents in rooms that are not used Should you also shut the doors How does this help if the air from the ac still goes to that room vent? look for itYou are on the right track. and if for long term - remove the vent cover and stuff old clothes in there to stop all air movement. Seal up the gap under the room door and if it has a return air duct in the room seal it up also. 
You will then have more AC air diverted to where you need it. … Popularity: 15. Popularity: 2 Would you recommend using oil to heat your home or is electric heat more efficient? Oil is the most expensive way to heat anything. Check your cost per kWh on your electric bill - the easy way: forget all the blah blah and divide the total $ by the total kWh. The most efficient heat source is a heat pump. For $400 worth of heat with electric or gas equipment, the same heat may cost you a… Popularity: 5 Why is gas so expensive? Because of the war in the Middle East. We are fighting in Iraq and Afghanistan, and that pushes the prices up so much. When we are done with this war it will be less. Popularity: 11 How do you remove paint from cast iron? Paint stripper, gloves, and a brush. Check with paint stores for stripper, and for the RIGHT gloves. Popularity: 6 Can beans be baked in a cast iron dutch oven? Yes. Popularity: 1 Does wood melt? Answer eventua… Popularity: 19 What is a remedy for skunk odor removal? The best remedy for skunk odor is BON-CC-41. Other remedies aren't as safe or good. Tomato juice is an old wives' tale. It just masks the odor or tricks your nose into thinking the skunk odor is gone, but really you're just smelling the tomato juice because that is the new scent. The hydrogen peroxide m… Popularity: 0 Where can you buy fire retardant spray? Sears Popularity: 25 Why does my pilot on a gas fireplace not stay lit? You are probably out of gas, or if it (the tank) was recently filled, there may be an air bubble; that happened with mine. Also, there could be a dirt particle on the lens of the pilot blocking it; usually the propane or gas company can come out and check it or clean it if necessary. They told us t… Popularity: 30 How do you get rid of smoke smell? he smokes weed and mayfair he started when he was 12 he says he has'nt but he has x Popularity: 2 What makes a fire burn? Fuel plus oxygen plus the addition of enough energy to begin the fire.
Popularity: 10 How much does it cost to add a fireplace? In March 2011, Washington State, cost to add a gas fireplace and standard size/quality unit, including construction costs and plumbing gas lines where service already exists: about $5,000. Could cut costs to as little as $3,500-$4,000 by going with a very inefficient smaller unit, but if you're spe… Popularity: 10 Is wood from pine trees good for firewood? Contrary to popular belief, burning Pine does not cause any more creosote buildup than hard wood. A "cool" fire with any wood is what causes creosote buildup. I burn about a 50-50 mix of hardwoods and pine. I check my chimney twice a year and I've never (I repeat never) had any creosote buildup in m… Popularity: 12 How many bathrooms are in the white house? 35 bathrooms Popularity: 2 How do you install a wood stove? There are many factors to consider when installing a wooden stove. The first step when installing is to look out for the one that suits your requirements. Some wood stoves are customized for your fireplace and are generally easier to install than the freestanding wood stove. Once the stove has been … Popularity: 2 Is rubber tree wood safe for burning in a fireplace? Rubber tree wood is not safe for burning in a fireplace indoors. Rubber tree would can be burned in an outside fire pit. Sometimes the fumes from rubber tree wood can be toxic. Popularity: 1 Do Fireplace heat reflectors really work? Answer 1 Yes, IF they are installed to fully meet the specifications of the manufacturer, AND IF the reflecting surface is kept clean. Popularity: 34 What is the height of chimney? According to NFPA 211 (The national standard for fireplaces) Chimneys SHALL stand 3' above the roofline AND 2' above anything within 10' (This is the 3-2-10 rule) Popularity: 4 How long does it take for human flesh to burn? Skin is another name for the human flesh. Human flesh is an organ and has cells within it. The human flesh will burn immediately when touched by fire. 
Popularity: 1 What is a Dutch oven? A Dutch oven is a cooking pot made out of iron (or, usually, cast iron), and with a tight-fitting lid. Popularity: 4 How good is hemlock to burn for heatig your house as to other firewood types? Here is a list of 50 or so different woods and their BTU/cord output. Hemlock ranks right below pine and comes in at 39th on the list. http:/ ALTERNATE ANSWER Hemlock, like pine, spruce and other evergreen trees is dangerous to use for heating your house. The am… Popularity: 27 Which gas in the air is needed for wood to burn? Answer: Burning of wood is a process of combustion. By definition if something is undergoing combustion oxygen must be involved in the reaction. Popularity: 24 Why does wood burn quickly when put on a fire that is already blazing? Answer Because heat is a big factor in starting and keeping a fire. You have air, and fuel, and the heat is already factored in. Yes, but that is not all The wood that is already there burning helps too because the fact that it is burning means that most of the wood is dry, and dry wood burns… Popularity: 20 How hot is fire? It depends on what you are burning but a candle burns at 760'F. Popularity: 12 How much does a cord of fire wood weigh? A cord is defined as 128 cubic feet (3.62 m3), corresponding to a woodpile 4 feet (122 cm) wide, 4 feet (1.2 m) high and 8 feet (244 cm) long. So the weight would depend on the type of wood. Cherry or apple would weigh several times what balsa would. Also, the pieces of wood are irregular in shape … Popularity: 25 Can you install a wall mounted LCD TV close to a wood stove that can get very hot? Answer You can, but I wouldn't suggest that you do. Heat is one of the biggest killers of today's TV sets. That's why so many of them have all those built in fans. Popularity: 19 How much is a cord of wood? A cord of wood is worth around $180. The price may be different depending on the type of wood, as well as the season. 
Popularity: 3 How much heat each person produce? It varies with weight and activity, but a widely accepted approximation is 100 Watts, or 340 Btu/hr, averaged throughout the day. Popularity: 18 How do you make an wood burning sauna? you need a chimney, wood burning sauna stove and a sauna. That's pretty much it. Popularity: 1 Why is it bad to burn foam seat cushions in your wood stove? When foam burns it produces a lot of noxious chemical byproducts. A lot of these are toxic and can cause respiratory distress or even death. Popularity: 1 Can you burn gumwood in a fireplace? t… Popularity: 24 How do you get rid of the smell of smoke? In your mouth by chewing gum but on your hands you need hand sanitizer Popularity: 2 The origin of homeward fireplace stove inserts and if the company still exists and how to contact them? I too have a HOMEWARD STOVE INSERT. The owner's manual shows an address of 277 Industrial Park Drive, Lawrenceville, Georgia 30245. I think that they are no longer around or changed names or were bought out. I need to replace my door glass. Popularity: 1 How cool should a firebed be to allow closing the flue? Answer By flue, I believe you mean the damper in the chimney, which has the effect of closing off the flue. Close the damper any time you are not using the fireplace. Remember -heat rises -so heated air will also rise up and out the flue in the winter. In the summer it is also good to keep the fl… Popularity: 24 Can you burn firewood today - I heard of a law about calling or checking before we burn wood in our fireplace? It depends on what area you live in. If you have any doubts, contact your local fire department who should have the most accurate information. As for California, there are "no-burn" days in Central California. This is now the law in many other parts of the country as well. Popularity: 4 Is the poplar wood toxic why? Yes, poplar wood can be toxic to some individuals. 
However, it really is only harmful if you have allergies to wood. Popularity: 1 Can you melt wood? no Popularity: 0 Is sand a gas? Sand is a solid! Popularity: 1 Your car has no heat and you have changed everything on it and it still isn't working what could be wrong? If everything on your car ( thermostat, heater core, heat controls, water pump) has been changed and you are sure that it all works, check the water flow. A clog or an air bubble in the heat lines will cause coolant to not flow and have no heat output. Popularity: 1 Skunk under house? use fox pee. I know it sounds redneck but a fox is a skunks only NATURAL pred... I lived in a trailer home park and this kept me skunk free... lowes, dicks and gander mt sell it Popularity: 3 Is it safe to burn wood with varnish in a fireplace? NO! Thats Highly dangerous as varnished, painted, and green treated woods release terrible toxins as they burn. Residue from treated wood sticks to the chiminey liner, if enough is burned, a liquid form of the residue will acumulate and run down the chimney liner. Never burn painted, stained, or t… Popularity: 33 Is it safe to burn pinion wood in an indoor fireplace? Yes Popularity: 1 Why does your digital dash in a 1987 Nissan 300ZX turbo keep going out and then turns back on for a little bit But when it goes out it goes out for awhile can someone please help you? could be dirt contacts on the Power Supply Unit, or maybe a corroded wire that needs to be resoldered somewhere in between. the power supply unit is located right above your "right" knee when you drive, in the panel below the steering wheel. The PSU is 5x5x1, and shouldn't be too hard to get to. f… Popularity: 1 What is a soft fire brick? A fire brick, or refractory brick, is a block of ceramic material that can withstand high temperatures and is used to line furnaces, kilns, fireboxes, and fireplaces. For example, they use it to surround pipes, conduits, etc. in walls as fire breaks. 
Hard fire bricks are used on the inside of things… Popularity: 16 Is fire hot? Heat is one of the things needed to maintain a fire. The others are oxygen and fuel. The amount of heat throughout the flame varies. The color is an indicator of the temperature. Popularity: 1 How close should a sycamore be to your house? Sycamore trees have huge trunks that often split near the bottom. They grow best in open areas. A sycamore should not be planted closer than 15 feet to a building of any kind. Popularity: 1 Why do you hang stockings at the fireplace? A noble man lost his wife so he and his three daughters were left with nothing. They had no money. Then one day his daughters just got done washing their clothes and they hung their stockings by the fire place. Then that night St. Nicolas came and put gold in them and the daughters were able to marr… Popularity: 2 Do glass doors on a fireplace reduce the amount of warm air that get wasted up chimney? When a fireplace is not being used... glass doors are not a good way to stop cold air from exiting the chimney. The first thing that needs to be done is the fireplace damper should be closed to prevent this heat loss. If the damper is not-functional or absent a chimney balloon or chimney top damper … Popularity: 12 How many bedrooms are in the Biltmore house? it is 34 bedrooms Popularity: 0 How many Fireplaces are in the White House? White House Facts: There are 132 rooms, 32 bathrooms, and 6 levels to accommodate all the people who live in, work in, and visit the White House. There are also 412 doors, 147 windows, 28 fireplaces, 7 staircases, and 3 elevators. Popularity: 4 What are advantages and disadvantages of fire? type your ans here Popularity: 5 What are the Massachusettts building regulations for installing wood stoves? Most building regulations defer to the manufacturer's recomendations when it comes to appliances. I know this is true for fireplaces in my district. 
Popularity: 4 Can you burn olive wood in the fireplace? Answer Yes. And it gives off a lovely smell, too. Answer Olive wood is a very hard, condensed wood. It is very hard to catch alight, but once it burns, it does so for a long time and gives off a lot of heat. Popularity: 22 Why do farts burn? because they contain methane, which is a flammable substance.According to the TV show QI, the main flammable ingredient is hydrogen. (whatdoctor) Popularity: 3 How do you install flat screen TV over fireplace? If the fire is in use (especially if it is a coal fire - sooty and often hot) I would not fix a TV there. Otherwise, bolt special mounting plates directly onto the chimney-breast bricks, and run a power cable and aerial lead from the nearest source as neatly as possible, usually behind the skirting-… Popularity: 1 Does wood or silver melt or burn faster? You cannot melt wood,You cannot burn silver,within the above. Popularity: 19 How do you get rid of calcium stains on marble countertops? Mix 1/2 cup of ammonia with 1 gallon of water in a bucket. Use a stiff bristled brush to scrub the calcium deposits with the ammonia water solution. Let the solution remain on the calcium deposits for 10 minutes, then rinse with clean water that does not contain ammonia. Repeat the process if necess… Popularity: 1 What is a hearth? A hearth is a brick- or stone-lined fireplace or stove that is often used for cooking. A hearth can also be defined as the stone or brick floor of a fireplace. It often extends onto the floor of a room.A hearth is a brick- or stone-lined fireplace or stove that is often used for cooking. A hearth ca… Popularity: 6 What is the best wood-burning fireplace insert? pacific energy......great warranty Popularity: 1 How do you close a fireplace? If you have a damper in the fireplace there should be a handle that will allow you to close this damper or metal flapper. 
If the damper or metal flapper is missing, damaged or broken you can get parts to fix it or you can install a chimney balloon to plug the chimney at the bottom or a chimney top d… Popularity: 12 Is rhododendron a good wood to burn in a fireplace? ANSWER:Live rhodendron contains gryanotoxin, which is poisonous if ingested. When burned the gryanotoxin is destroyed at temperatures of 300 degres and above, and no evidence of toxicity has been found in the smoke or coals of the rhododendron plant. It is a hard long-burning wood and can be used sa… Popularity: 2 How can an active huge bee hive be removed from a chimney?. Popularity: 1 Can you burn Bradford Pear wood in the fireplace? Yes! you can burn any wood, if you are burning for supplementary heat anyway. If wood is your only heat source, i would suggest oak, locust, or hickory. they are the best. easy to split and produce a lot of heat and last a while. cherry is a good one too. Bradford pear is decent wood. i rank it as a… Popularity: 18 Who invented the wood fireplace? The wood fireplace goes back thousands of years and no one knows who invented it. Benjamin Franklin invented a particular kind of stove (not a fireplace) that was more effective and efficient than a fireplace in warming a room and in fuel consumption. The Franklin Stove was also easier and safer to … Popularity: 2 When was the gas fireplace invented? The first gas fireplace is unknown to me but the regular fireplace was made by Benjamin Franklin and it was called the Franklin stove Popularity: 1 My gas fireplace pilot light works perfectly. However when I flip the switch for the rest of the fireplace to turn on nothing happens.? I am having the same problem, possible one of two answers. It's either the switch or the valve. If you follow the cord from the switch it will have a black and a white wire attached to the gas valve under the fireplace. 
To rule out the switch as the problem take an ordinary paper clip and touch it to… Popularity: 7 How do you install a chimney liner for a wood stove? This repair is best handled by a professional, but if you can get your hands on a good quality 316 Ti alloy flexible stainless steel liner or rigid liner sections, you may be able to handle it with help from a friend. Most liners come with instructions from the manufacturer. Follow these closely. Ba… Popularity: 8 Where do farmers go when there is a fire? They go berserk. Popularity: 4 Where is the location of the fresh air intake in a fireplace? There should be a louvered intake outside the fireplace, and a steel door covering the intake tunnel, inside the house located towards the front, and in the floor of the fireplace. Popularity: 1 Why does fire kill vampires? Why does fire kill humans? It's the same concept. >_> Popularity: 1 How do you clean a limestone fireplace? The best way to clean a limestone fireplace is by using a pH neutral stone cleaner. This cleaner is used with warm water and a sponge. Popularity: 0 How many bedrooms are in the White House? 132 rooms, 35 bathrooms, 412 doors, 147 windows, 28 fireplaces, 8 staircases and 3 elevators. Popularity: 7 Can we replace the glass in our wood stove with galvanized steel? The zinc coating which is used for galvanizing emits toxic gas when burned. Popularity: 1
When I run LCS( 'human', 'chimp' ), I'm getting "h" instead of "hm". When I run LCS( 'gattaca', 'tacgaacta' ), I'm getting "g" instead of "gaaca". When I run LCS( 'wow', 'whew' ), I'm getting "ww" which is correct. When I run LCS( '', 'whew' ), I'm getting "" which is correct. When I run LCS( 'abcdefgh', 'efghabcd' ), I'm getting "a" instead of "abcd". What am I doing incorrectly? Here is my code:

def LCS(S, T):
    array = ''
    i = 0
    j = 0
    while i < len(S):
        while j < len(T):
            if S[i] == T[j]:
                array += S[i]
            j += 1
        i += 1
    return array

Figured it out thanks to the people next to me in the lab! It would also be nice to not run into snooty people on Stack Overflow every now and then.

def LCS(S, T):
    # If either string is empty, stop
    if len(S) == 0 or len(T) == 0:
        return ""
    # First property
    if S[-1] == T[-1]:
        return LCS(S[:-1], T[:-1]) + S[-1]
    # Second property
    # Last of S not needed:
    result1 = LCS(S[:-1], T)
    # Last of T not needed
    result2 = LCS(S, T[:-1])
    if len(result1) > len(result2):
        return result1
    else:
        return result2
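Worth noting: the plain recursive solution recomputes the same (i, j) subproblems exponentially many times, so it slows to a crawl on longer strings. A memoized sketch (my own variant, not from the original thread) caches each subproblem so it is solved once:

```python
def lcs(s, t):
    # Memoized longest common subsequence: cache[(i, j)] holds the
    # LCS of s[:i] and t[:j], so each subproblem is computed once.
    cache = {}

    def go(i, j):
        if i == 0 or j == 0:
            return ""
        if (i, j) not in cache:
            if s[i - 1] == t[j - 1]:
                cache[(i, j)] = go(i - 1, j - 1) + s[i - 1]
            else:
                a, b = go(i - 1, j), go(i, j - 1)
                cache[(i, j)] = a if len(a) >= len(b) else b
        return cache[(i, j)]

    return go(len(s), len(t))

print(lcs('human', 'chimp'))  # -> 'hm'
print(lcs('wow', 'whew'))     # -> 'ww'
```

For very long inputs you would also want to convert the recursion to a bottom-up table to avoid Python's recursion limit, but the memoized form is the smallest change from the recursive answer above.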
During the recent Microsoft Ignite conference I heard questions related to hybrid and partner free/busy relationships quite often, so I wanted to write about it. This scenario applies to companies with one or more external free/busy relationships configured. For instance, you could have two or more companies on-premises sharing free/busy with each other. Then one fine day, one of the companies deploys a hybrid configuration and moves mailboxes to Office 365; in our example, it is Tailspin. Suddenly, Contoso and Fabrikam users find that availability information is not showing for the Tailspin mailboxes moved to Exchange Online. This blog post discusses why free/busy is broken and what you can do about it.

Note: OAUTH is not a supported way to see free/busy between two on-premises organizations.

Note: In most environments, the shared namespace “TailspinToys.com” can be used as the Target address for on-premises users and you would not need the additional namespace of onprem.TailspinToys.com. However, to account for all complex partnerships that could be in place, a unique namespace used as the target address will ensure free/busy works properly.

2. Create an Organization Relationship between Contoso on-premises and Tailspin in Exchange Online. For this Organization Relationship the domain name should be TailspinToys.Mail.onmicrosoft.com.

3. Make sure that you have a solution in place to sync mailbox-enabled objects between Tailspin and Contoso. As a mailbox is moved from Tailspin on-premises to Tailspin online, Contoso needs to be made aware and the related objects’ Target Address needs to be updated in Contoso. This is needed to ensure we direct the free/busy requests to the correct premises the first time. This step can be achieved via Forefront Identity Manager (FIM) or with a script.

Note: The domain name “TailspinToys.com” is not present in any of the Organization Relationships in the Contoso environment.
Keeping this name out of the Organization Relationship will ensure that you can continue to use the shared namespace and see free/busy information.

Thanks for sharing. This clarifies a lot and helps to understand a hybrid M&A scenario much better.

Awesome article... this helps explain how to share free/busy details with another organization in a hybrid configuration.

I opened a call on this problem last week for two hybrid tenants. At Tier 2, they still have no idea what is going on. With this post and a few lines of PowerShell - all is now well with the world. I am curious about the reverse scenario. But I'll test that in my labs.

But why doesn't the Tailspin on-premises Exchange server return the target address to Contoso? In that case, I won't need to sync mailboxes with target addresses from Tailspin to Contoso. Scenario: Contoso user asks for F/B of cloud user "userA@tailspintoys.com" -> Contoso Exchange asks Tailspin Exchange -> Tailspin Exchange recognizes that "userA@tailspintoys.com" has a target address of "userA@tailspintoys.mail.onmicrosoft.com" and provides this information to Contoso Exchange. Contoso Exchange will start / redirect the F/B request against O365 (of course you need to configure the OrganizationRelationship from Contoso to tailspintoys.mail.onmicrosoft.com first). This would be my desired scenario.
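As one commenter mentions, the Organization Relationship from step 2 is only a few lines of PowerShell. For reference, it is typically created with the New-OrganizationRelationship cmdlet; the relationship name below is made up for the Tailspin example, and this is a configuration sketch rather than something lifted from the post:

```powershell
# Run in the Contoso on-premises Exchange Management Shell (illustrative).
# The relationship targets Tailspin's Exchange Online routing domain,
# NOT the shared TailspinToys.com namespace, per the note above.
Get-FederationInformation -DomainName "TailspinToys.mail.onmicrosoft.com" |
    New-OrganizationRelationship -Name "Contoso to Tailspin Online" `
        -FreeBusyAccessEnabled $true `
        -FreeBusyAccessLevel AvailabilityOnly
```

Use LimitedDetails instead of AvailabilityOnly if the partner should see subject and location, not just busy status.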
BugTraq

Possible windows+python bug
Mar 22 2005 12:21PM
liquid cyberspace org (1 replies)

This bug is produced on Windows XP SP1 (OSVer: 5_1_2600) with Python 2.3 installed. Start Python and type (of course x.x.x.x should be replaced with an IP address):

import socket
s=socket.socket(socket.AF_INET,socket.SOCK_RAW,4)
s.sendto("",("x.x.x.x",0))

Press ENTER and your Windows box should crash immediately. On my test, after restart, Windows returned BCCode: d1. By the way, IP protocol 0x04 is "IP over IP", and I could send such datagrams a month ago with Python (although Microsoft has crippled some protocols). Now, this may be specific to this configuration, or it could be due to some driver (BCCode: d1 is specific to driver-related problems). It needs further testing on different configurations.

Re: Possible windows+python bug
Mar 22 2005 06:37PM
Neil Schemenauer (nas-bugtraq arctrix com)
Extracting Images from PDFs

A quick recipe for extracting images embedded in PDFs (and in particular, extracting photos contained with PDFs…). For example, Shell Nigeria has a site that lists oil spills along with associated links to PDF docs that contain photos corresponding to the oil spill:

Running an import.io scraper over the site can give a list of all the oil spills along with links to the corresponding PDFs. We can trawl through these links, downloading the PDFs and extracting the images from them.

import os,re
import urllib2

#New OU course will start using pandas, so I need to start getting familiar with it.
#In this case it's overkill, because all I'm using it for is to load in a CSV file...
import pandas as pd

#url=''
#Load in the data scraped from Shell
df= pd.read_csv('shell_30_11_13_ng.csv')

errors=[]
#For each line item:
for url in df[df.columns[15]]:
    try:
        print 'trying',url
        u = urllib2.urlopen(url)
        fn=url.split('/')[-1]
        #Grab a local copy of the downloaded picture containing PDF
        localFile = open(fn, 'w')
        localFile.write(u.read())
        localFile.close()
    except:
        print 'error with',url
        errors.append(url)
        continue
    #If we look at the filenames/urls, the filenames tend to start with the JIV id
    #...so we can try to extract this and use it as a key
    id=re.split(r'[_-]',fn)[0]
    #I'm going to move the PDFs and the associated images stripped from them in separate folders
    fo='data/'+id
    os.system(' '.join(['mkdir',fo]))
    idp='/'.join([fo,id])
    #Try to cope with crappy filenames containing punctuation chars
    fn= re.sub(r'([()&])', r'\\\1', fn)
    #THIS IS THE LINE THAT PULLS OUT THE IMAGES
    #Available via poppler-utils
    #See:
    #Note: the '; mv' etc etc bit copies the PDF file into the new JIV report directory
    cmd=' '.join(['pdfimages -j',fn, idp, '; mv',fn,fo ])
    os.system(cmd)

#Still a couple of errors on filenames
#just as quick to catch by hand/inspection of files that don't get moved properly
print 'Errors',errors

Images in the /data directory at: The important line of code
in the above is:

pdfimages -j FILENAME OUTPUT_STUB

FILENAME is the PDF you want to extract the images from; OUTPUT_STUB sets the main part of the name of the image files. pdfimages is actually a command line tool, which is why we need to run it from the Python script using the os.system call. (I'm running on a Mac – I have no idea how this might work on a Windows machine!) pdfimages can be downloaded as part of poppler (I think?!) See also this Stack Exchange question/answer: Extracting images from a PDF

PS To put this data to work a little, I wondered about using the data to generate a WordPress blog with one post per spill. provides a Python API. First thoughts were:

- generate a post containing images and body text made up from data in the associated line from the CSV file.

Example data: So we can pull this out for the body post. We can also parse the image PDF to get the JIV ID. We don't have lat/long (nor northing/easting) though, so no maps unless we try a crude geocoding of the incident site column (column 2). A lot of the incidents appear to start with a pipe diameter, so we can maybe pull this out too (eg 8″ in the example above).

We can use things like the cause, terrain, est. spill volume (as a range?), and maybe also an identified pipe diameter, to create tags or categories for the post. This allows us to generate views over particular posts (eg all posts relating to theft/sabotage). There are several dates contained in the data and we may be able to do something with these – eg to date the post, or maybe as the basis for a timeline view over all the data. We might also be able to start collecting stats on eg the difference between the date reported (col 1) and the JIV date (col 3), or, where we can scrape it, look for structure in the clean-up status field. For example:

Recovery of spilled volume commenced on 6th January 2013 and was completed on 22nd January 2013. Cleanup of residual impacted area was completed on 9th May 2013.
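Those clean-up status sentences look templated, so a regex might be enough to pull the commencement/completion dates out. A rough, hypothetical sketch (the pattern below is my guess at the phrasing, not code from the original post):

```python
import re
from datetime import datetime

status = ("Recovery of spilled volume commenced on 6th January 2013 and was "
          "completed on 22nd January 2013. Cleanup of residual impacted area "
          "was completed on 9th May 2013.")

# Dates appear as e.g. "6th January 2013"; capture the pieces and drop the
# ordinal suffix (st/nd/rd/th) so strptime can parse them.
DATE_RE = re.compile(r'(\d{1,2})(?:st|nd|rd|th)\s+(\w+)\s+(\d{4})')

def parse_dates(text):
    dates = []
    for day, month, year in DATE_RE.findall(text):
        dates.append(datetime.strptime(' '.join([day, month, year]),
                                       '%d %B %Y').date())
    return dates

print(parse_dates(status))
# [datetime.date(2013, 1, 6), datetime.date(2013, 1, 22), datetime.date(2013, 5, 9)]
```

If the wording varies across reports, the pattern would need to be loosened, but as a first pass this gives sortable dates for a timeline view.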
If those phrases are common/templated refrains, we can parse the corresponding dates out? I should probably also try to pull out the caption text from the image PDF [DONE in code on github] and associate it with a given image? This would be useful for any generated blog post too?

Comments:

I've tried pdfimages on academic pdfs. Doesn't work as desired. Don't know if it's the tool or just the awful way in which academic pdfs are made. Tended to get lots of tiny images out (seemingly pixel by pixel) rather than whole figures. Rather annoying!
Ross Mounce (@rmounce) December 14, 2013 at 1:03 am

Hi Ross.. I'm not really that familiar with how PDFs package images (in fact, how they package anything). Are there any other open source tools out there, I wonder, that are more specifically tuned to extracting images howsoever they are embedded in a PDF?
Tony Hirst December 18, 2013 at 10:21 am
http://blog.ouseful.info/2013/12/01/extracting-images-from-pdfs/
10 August 2012 11:38 [Source: ICIS news]

SINGAPORE (ICIS)--Scheduled maintenance at a couple of key Indian refineries, meanwhile, would lead to tighter naphtha supply going forward, they said.

The Asian naphtha crack spread for second-half September rallied to $99.65/tonne against September Brent crude futures, the strongest since 14 May, according to ICIS data.

"The reforming margin is still strong," said a trader, referring to the price spread between motor gasoline and naphtha.

Asian gasoline supply is currently in a shortfall, because refinery upsets in the

Refinery shutdowns in

Naphtha supply from key exporter –

"There will be less Indian supply because of maintenance," a trader said.

Open-spec prices for the second-half September contract extended gains despite falling Brent crude futures in Friday afternoon trade. Prices rose by $20.50/tonne from Wednesday to $943.50-944.50/tonne CFR (cost and freight) (
http://www.icis.com/Articles/2012/08/10/9585812/asia-naphtha-crack-spread-rallies-to-near-100tonne.html
Revision history for Dist-Zilla-Plugin-MetaProvides-FromFile

2.001002 2017-03-07T14:15:57Z 91340b1
 - Bugfix: Avoid test failures with -Ddefault_inc_excludes_dot
 - Removed use of ConfigDumper
 - Removed use of dztest
 [Dependencies::Stats]
 - Dependencies changed since 2.001001, see misc/*.deps* for details
 - develop: +5 ↑2 -1 (suggests: ↑2)
 - runtime: -1
 - test: ↓1 -1

2.001001 2015-06-06T13:39:39Z f6c555d
 [Bugs]
 - Make reader_name a string because ClassName requires it to be loaded, but we internally load things. Closes #1. Thanks PJFL for reporting.
 [Dependencies::Stats]
 - Dependencies changed since 2.001000, see misc/*.deps* for details
 - configure: +1 (recommends: ↑1)
 - develop: +9 ↑3 -2 (recommends: +1, suggests: ↑2)
 - runtime: ↓1 -1 (recommends: +1)
 - test: +1 (recommends: ↑1 ↓1)

2.001000 2014-09-04T18:28:48Z
 [00 Minor]
 - now dumps configuration.
 - Tests improved.
 [Dependencies::Stats]
 - Dependencies changed since 2.000001, see misc/*.deps* for details
 - develop: +1 ↑1 (suggests: ↑2)
 - runtime: +1
 - test: +2 -1
 [Tests]
 - Reimplemented with dztest

2.000001 2014-08-16T00:23:28Z
 [00 Trivial]
 - CPANDAY!
 - no code changes.
 - tests augmented.
 - whitespace adjusted.
 [Dependencies::Stats]
 - Dependencies changed since 2.000000, see misc/*.deps* for details
 - develop: +1 (suggests: ↑1)
 - test: -3
 [Misc]
 - Improve source side POD to reduce build side whitespace.
 [Tests]
 - update test::reportprereqs

2.000000 2014-07-31T04:49:22Z
 [00 Major]
 - Sizeable changes that may impact downstreams that work by hand.
 [01 Tooling]
 - Tooling switched to ExtUtils::MakeMaker
 - Dependency ramping softened.
 [02 Version Scheme]
 - Version scheme shortened from x.yyyyyyyy (Mantissa = 8) to x.yyyyyy (Mantissa = 6)
 - This is the primary reason for the 2.x on the box.
 [Dependencies::Stats]
 - Dependencies changed since 1.11060211, see misc/*.deps* for details
 - build: -1
 - configure: +1 -1 (recommends: +1)
 - develop: +44 -2 (recommends: -1, suggests: +1 ↑1)
 - runtime: +2 ↑1 -2
 - test: +5 ↓1 -2 (recommends: +3)
 [Misc]
 - Whitespace padded by replace_with_blank
 - use Module::Runtime instead of Class::Load
 - Don't use Autobox
 - Tighter to critic
 - Many generated test updates

1.11060211 2013-04-08T09:49:35Z
 [Documentation]
 - Greatly improve previously lacklustre documentation so it's not confusing for people who didn't read the MetaProvides docs first.

1.11060210 2013-04-08T08:33:46Z
 - Maintenance Release for Module::Build 0.4004
 [Dependencies::Stats]
 - Dependencies changed since 1.11060209, see misc/*.deps* for details
 - build: ↑1
 - configure: ↑1
 - develop: +5 (recommends: ↑1, suggests: ↑1)
 - test: ↑1
 [Documentation]
 - Update Copyright year.
 - Add README.mkdn
 [Meta]
 - Bugtracker to github
 [Packaging]
 - Update Build.PL for new test_requires feature

1.11060209 2012-02-02T20:02:50Z
 - Maintenance release.
 [Dependencies::Stats]
 - Dependencies changed since 1.11060208, see misc/*.deps* for details
 - develop: (suggests: ↑1)
 - runtime: +3
 - test: -1
 [Internals]
 - All namespaces now declare $AUTHORITY
 - $VERSION declarations moved outside BEGIN
 [Packaging]
 - Update LICENSE ( Year, Indent, Address )
 - Move extra-tests to xt/
 - GIT urls moved to https://
 - declares x_authority
 - Git Versions

1.11060208 2011-04-05T20:05:45Z
 - Minor changes only, mostly infrastructural.
 [Dependencies]
 - Now depends on Class::Load
 [Dependencies::Stats]
 - Dependencies changed since 1.11034201, see misc/*.deps* for details
 - develop: +1 (recommends: +1, suggests: +1)
 - runtime: +1
 - test: +1
 [Features]
 - Now uses Class::Load instead of eval() for loading specified readers. This increases security somewhat.
 [Packaging]
 - Moved to @Author::KENTNL
 - Critic is now stricter.
 - Ship .perltidyrc
 - Reworked Changes for CPAN::Changes style.
 - Moved perlcriticrc to perlcritic.rc
 - Remove inc/*
 - Use Bootstrap::lib
 - Fix prereq -> prereqs
 [Tests]
 - Dropped handwritten perlcritic tests in favour of generated ones.
 - Dropped portability tests.
 - Added CPAN::Changes tests.

1.11034201 2010-07-24T13:43.
https://metacpan.org/changes/distribution/Dist-Zilla-Plugin-MetaProvides-FromFile
Figure size in different units

The native figure size unit in Matplotlib is inches, deriving from print industry standards. However, users may need to specify their figures in other units like centimeters or pixels. This example illustrates how to do this efficiently.

import matplotlib.pyplot as plt

text_kwargs = dict(ha='center', va='center', fontsize=28, color='C1')

Figure size in inches (default)

plt.subplots(figsize=(6, 2))
plt.text(0.5, 0.5, '6 inches x 2 inches', **text_kwargs)
plt.show()

Figure size in centimeter

Multiplying centimeter-based numbers with a conversion factor from cm to inches gives the right numbers. Naming the conversion factor cm makes the conversion almost look like appending a unit to the number, which is nicely readable.

cm = 1/2.54  # centimeters in inches
plt.subplots(figsize=(15*cm, 5*cm))
plt.text(0.5, 0.5, '15cm x 5cm', **text_kwargs)
plt.show()

Figure size in pixel

Similarly, one can use a conversion from pixels. Note that you could break this if you use savefig with a different explicit dpi value.

px = 1/plt.rcParams['figure.dpi']  # pixel in inches
plt.subplots(figsize=(600*px, 200*px))
plt.text(0.5, 0.5, '600px x 200px', **text_kwargs)
plt.show()

Quick interactive work is usually rendered to the screen, making pixels a good size of unit. But defining the conversion factor may feel a little tedious for quick iterations. Because of the default rcParams['figure.dpi'] = 100, one can mentally divide the needed pixel value by 100 [1]:

plt.subplots(figsize=(6, 2))
plt.text(0.5, 0.5, '600px x 200px', **text_kwargs)
plt.show()

[1] Unfortunately, this does not work well for the matplotlib inline backend in Jupyter because that backend uses a different default of rcParams['figure.dpi'] = 72. Additionally, it saves the figure with bbox_inches='tight', which crops the figure and makes the actual size unpredictable.
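As a quick arithmetic check of that caveat (plain Python, no plotting needed; the dpi values are the defaults mentioned above): a figure sized with the px factor only ends up at the intended pixel size if it is rendered at the same dpi it was sized for.

```python
creation_dpi = 100                       # rcParams['figure.dpi'] default
px = 1 / creation_dpi                    # pixel in inches
figsize = (600 * px, 200 * px)           # 6.0 x 2.0 inches

# Rendered at the creation dpi, the figure is the intended 600x200 pixels...
at_creation = tuple(round(s * creation_dpi) for s in figsize)
print(at_creation)   # (600, 200)

# ...but savefig(dpi=300) would produce an 1800x600 pixel image instead,
# and the inline backend's dpi of 72 would give 432x144 pixels.
print(tuple(round(s * 300) for s in figsize))  # (1800, 600)
print(tuple(round(s * 72) for s in figsize))   # (432, 144)
```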
https://matplotlib.org/stable/gallery/subplots_axes_and_figures/figure_size_units.html
lightweight python package for finding the timezone of any point on earth (coordinates)

Project description

timezonefinderL is the faster and lightweight, but inaccurate version of the original timezonefinder. Use this package in favour of timezonefinder when memory usage and speed matter more to you than accuracy. Only the function timezone_at() is being supported and numba cannot be used for precompilation. The commands need to be modified:

pip install timezonefinderL

from timezonefinderL import TimezoneFinder
tf = TimezoneFinder()
longitude, latitude = 13.358, 52.5061
tf.timezone_at(lng=longitude, lat=latitude)  # returns 'Europe/Berlin'

For everything else please refer to the original Documentation.

Operating Principle

Instead of storing timezone polygons and checking which polygon a query point is included in, like with the vanilla timezonefinder, this package uses only the precomputed shortcuts to instantly look up a timezone. The zone which has the highest amount of timezone polygons (not covered surface!) in a shortcut is instantly returned. This requires far less memory and computing time, but of course is not accurate close to the borders of two neighbouring timezones. The size of the shortcuts (<-> accuracy) is equal to the one used in the vanilla timezonefinder (1 shortcut per degree longitude, 2 per degree latitude, 260KB binary file size). In order to increase the accuracy (more and smaller shortcut rectangles), increment the parameters NR_SHORTCUTS_PER_LNG and NR_SHORTCUTS_PER_LAT in global_settings.py and compile a new binary shortcut file by running file_converter.py.

Speed Test Results: obtained on MacBook Pro (15-inch, 2017), 2,8 GHz Intel Core i7

It can be seen that timezonefinderL is roughly one order of magnitude faster than timezonefinder:

Speed Tests:
-------------
"realistic points": points included in a timezone

in memory mode: False
testing 100000 realistic points
total time: 0.5513s
avg.
points per second: 1.8 * 10^5
testing 100000 random points
total time: 0.5682s
avg. points per second: 1.8 * 10^5

in memory mode: True
testing 100000 realistic points
total time: 0.1688s
avg. points per second: 5.9 * 10^5
testing 100000 random points
total time: 0.1837s
avg. points per second: 5.4 * 10^5

Most certainly there is stuff I missed, things I could have optimized even further etc. I would be really glad to get some feedback on my code. If you notice that the tz data is outdated, encounter any bugs, have suggestions, criticism, etc. feel free to open an Issue, add a Pull Request on Git or … contact me: [python] {-at-} [michelfe] {-*dot-} [it]*

License

timezonefinder is distributed under the terms of the MIT license (see LICENSE.txt).

Also see: GitHub, PyPI, GUI and API of the outdated timezonefinderL, timezonefinder

Changelog

4.0.2 (2019-05-23)
- MAJOR UPDATE: only the function timezone_at() is being supported
- not based on the simplification of the timezone polygons any more (not easily achievable with the new boundary data set)
- use the precomputed shortcuts to instantly look up a timezone ("instant shortcut", most common zone of the polygons within that shortcut)
- updated the code to the status of the current timezonefinder main package v4.0.2
- data in use now is timezone-boundary-builder 2019a
- described options for increasing the accuracy in readme
- dropped python2 support

2.0.1 (2017-04-08)
- added missing package data entries (2.0.0 didn't include all necessary .bin files)

2.0.0 (2017-04-07)
- introduction of this version of timezonefinder
- data has been simplified which affects speed and data size. Around 56% of the coordinates of the timezone polygons have been deleted and around 60% of the polygons (mostly small islands) have been included in the simplified polygons. For any coordinate on landmass the results should stay the same, but accuracy at the shorelines is lost.
This eradicates the usefulness of closest_timezone_at() and certain_timezone_at(), but the main use case for this package (= determining the timezone of a point on landmass) is improved.
- file_converter.py has been complemented and modified to perform those simplifications
- introduction of new function get_geometry() for querying timezones for their geometric shape
- added shortcuts_unique_id.bin for instantly returning an id if the shortcut corresponding to the coords only contains polygons of one zone
- data is now stored in separate binaries for ease of debugging and readability
- polygons are stored sorted after their timezone id and size
- timezonefinder can now be called directly as a script (experimental with reduced functionality, see readme)
- optimisations on point in polygon algorithm
- small simplifications in the helper functions
- clarification of the readme
- clarification of the comments in the code
- referenced the new conda-feedstock in the readme
- referenced the new timezonefinder API/GUI

for older versions refer to timezonefinder.
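The "instant shortcut" lookup described under Operating Principle can be sketched roughly like this. Note this is an illustrative guess at the cell indexing (1 shortcut per degree longitude, 2 per degree latitude), not the package's actual binary-file code:

```python
import math

NR_SHORTCUTS_PER_LNG = 1   # shortcuts per degree longitude
NR_SHORTCUTS_PER_LAT = 2   # shortcuts per degree latitude

def shortcut_cell(lng, lat):
    """Map a coordinate to its shortcut grid cell (illustrative only)."""
    x = int(math.floor((lng + 180.0) * NR_SHORTCUTS_PER_LNG))
    y = int(math.floor((90.0 - lat) * NR_SHORTCUTS_PER_LAT))
    return x, y

# A precomputed table cell -> most frequent zone would then answer the query
# with a single lookup, no point-in-polygon test at all.
print(shortcut_cell(13.358, 52.5061))  # (193, 74)
```

This is why the package is fast but inaccurate near zone borders: every point in the same cell gets the same answer.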
https://pypi.org/project/timezonefinderL/
Alembic baking round 2 [SOLVED]

On 07/04/2015 at 00:55, xxxxxxxx wrote:

So I got back into it, and discovered that the points link method will indeed work in some form or fashion. I'm tackling this from a different perspective: I generate a Spline object from a number of alembic curves manually this time, but all I get in the end is a mass of points, not a bunch of segments with individual splines. I was curious as to what I may be doing wrong.

import c4d
import os, hashlib, csv
from c4d import bitmaps, gui, plugins, documents, utils
from types import *

if __name__ == '__main__':
    # Get active doc
    doc = c4d.documents.GetActiveDocument()
    # Get project current start and end frames
    startFrameValue = int(doc.GetMinTime().GetFrame(doc.GetFps()))
    endFrameValue = int(doc.GetMaxTime().GetFrame(doc.GetFps()))
    #pointCount = op.GetGuides().GetPointCount()
    newSpline = c4d.SplineObject(0, c4d.SPLINETYPE_LINEAR)
    #newSpline = c4d.BaseObject(c4d.Ospline)
    # Create new null
    newSpline.SetName("hair_baked")
    doc.InsertObject(newSpline)
    extCurves = op.GetChildren()
    for curve in extCurves:
        realSpline = curve.GetRealSpline()
        obj = realSpline.GetClone()
        if(obj):
            curvePntCnt = obj.GetPointCount()
            segPntStart = newSpline.GetPointCount()  # the start of the new segment's points should be here
            print "Segment point start: " + str(segPntStart)
            # if the point count for this alembic curve is zero we shouldn't make a new segment
            if(curvePntCnt != 0):
                newSpline.ResizeObject(newSpline.GetPointCount()+curvePntCnt, newSpline.GetSegmentCount()+1)
                print "Segment count: " + str(newSpline.GetSegmentCount())
                print "Point count: " + str(newSpline.GetPointCount())
                # transfer point positions from alembic curve to combined spline
                print "-----------Point vectors----------"
                for i in range(0, curvePntCnt-1):
                    print obj.GetPoint(i)
                    newSpline.SetPoint(segPntStart + i, obj.GetPoint(i))
                print "---------Point vectors end--------"
    c4d.EventAdd()

On 07/04/2015 at 01:55, xxxxxxxx wrote:

Hello, the segments can be defined
with SplineObject.SetSegment(). You can set up your segments e.g. in the last for loop of your script. Marginalia: I always send an update message if the bounding box of an object has changed (C4DAtom.Message(c4d.MSG_UPDATE)).

Hope this helps!
Best wishes
Martin

On 07/04/2015 at 06:44, xxxxxxxx wrote:

Hello, as Martin said, you must use SetSegment() to define the segments of the spline you create. You can find an example of how to use this function in the Double Circle plugin.

Best wishes,
Sebastian

On 08/04/2015 at 01:11, xxxxxxxx wrote:

There were just a few more complications, but the SetSegment advice and several hours of looking at documentation and point counts got me the rest of the way. Once I had the splines showing up I began to realize C4D's hair system uses the same point count for each guide, whereas upon export/import that count gets jacked up. Incorporating a guide point count and the ID_MODELING_SPLINE_ROUND_TOOL via SendModelingCommand got me the rest of the way there.
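For anyone following along without Cinema 4D open, the segment bookkeeping the answers describe boils down to this plain-Python sketch (the c4d calls appear only as comments, and the per-curve point counts are made up):

```python
# Per-curve point counts (hypothetical); zero-point curves get no segment.
curve_point_counts = [4, 0, 6, 3]

segments = []       # (start point index, point count) per segment
total_points = 0
for cnt in curve_point_counts:
    if cnt == 0:
        continue
    seg_index = len(segments)
    # In c4d this is where you would grow the spline and declare the segment:
    #   newSpline.ResizeObject(total_points + cnt, seg_index + 1)
    #   newSpline.SetSegment(seg_index, cnt, False)  # cnt points, open spline
    # ...then SetPoint(total_points + i, p) for each point i of the curve.
    segments.append((total_points, cnt))
    total_points += cnt

print(segments, total_points)  # [(0, 4), (4, 6), (10, 3)] 13
```

The key point is that ResizeObject only allocates points and segment slots; without the matching SetSegment calls, the spline is exactly the "mass of points" the original post describes.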
https://plugincafe.maxon.net/topic/8635/11301_alembic-baking-round-2-solved/3
Microcontroller Programming » Where do I find the C code of the functions in the library?

I understand that when you create a library of functions, it takes three different files to do so. First you create the function in one C program, then you create an h file, and then you create the main program that accesses the function. For example, if we want to write a program that uses the LCD, we need three files: our main program, lcd.h, and lcd.c. lcd.c contains the actual function and it can easily be viewed in the Nerdkit library. However, I am having difficulty finding the actual code written for many different AVR functions. For example, let's say that I want to see the actual function for some of the math functions like "log10()" or "sqrt()". I know the h file is named "math.h" and I know where that is located. However, the h file doesn't show the actual code for the function. Shouldn't there be a c file (like math.c, for example) that shows the code for the actual functions? I looked all over for it and couldn't find it. Where is it located and what is the c file called?

That is part of the GCC AVR Toolchain Distribution. That is not code that Nerdkits has any hand in creating. The GCC AVR Toolchain is distributed as a unit that has been tested and works. That would be something that is never changed or modified. That is why you will not find them in the standard installation. I think this location has the Source Files for the Libc Library. (It is not something I have ever looked into so I may be incorrect.) GCC AVR Libc Source
Is there any place where I could actually see it, or is it something that is hidden by the ATMEL corporation? Thirdly, although it would be nice to actually see the code used to make the AVR functions, I don't really need it. I just want to understand how to use these functions. Obviously a lot of the functions in "math.h" are pretty obvious. The name basically tells what they do. So the sqrt() function finds the square root of a number, and the sin() function finds the sine of a number. However, I see many of these AVR functions in code, like fdevopen() and FDEV_SETUP_STREAM(), and I am not sure what they do exactly. I have previously viewed the AVR website and I do see all of the AVR h files, and it briefly describes each function in the library. However, I feel that it doesn't explain a lot of the functions too clearly for someone not familiar with them. I believe that the website is just used to remind people how certain functions work. However, I need something that explains those functions to me in a clearer way, and I couldn't find the information. I have also read a C programming book and could not find a reference to many of those functions. Does anybody know where I could find information that will give me a better explanation of all the AVR functions?

The gcc compiler is open source and you can see all of it if you wish. Same goes for avr libc. Those are the places where you will find the sources to all the built-in functions. Jim's link will get you started. I haven't needed to look at it but I'm sure it's complex code, and if your programming skills are less than advanced you'll have a hard time figuring out where everything is as well as what exactly it is doing. I have referenced the libc documentation quite a bit and that is where you can find information on FDEV_SETUP_STREAM(). I also search the internet whenever I need a better understanding of how something works.
Usually I find many examples as well as various explanations on a topic. Occasionally I can't find the answer and post a question on AVR Freaks. Answers typically come in a couple of hours. However, the AVR Freaks folks may roast you if you don't exhaust the search engine results first; they know if the answer is already out there or not. Otherwise they're real nice. If you're a Windows guy then you'll probably have a rough time with open source stuff because most of it is developed on Linux and sometimes difficult to follow if you're not familiar with the Linux environment. If that's the case you should 86 your Windows box and get started with Ubuntu or one of the other Linux distros and start learning. I did that a while back and it was tough at first but now I couldn't be happier. For example, I downloaded the avrdude source and compiled it on my system so I could put debug messages in various places to figure out how it works. Did the same thing with GTKterm because it was broken, so I downloaded, built, and then fixed it and it works great now. So far all the open source stuff I am interested in is written in C and that makes it easier since C is my primary language anyway. And my absolute favorite part is I don't spend money on Windows or Windows software any more, which means I can buy more parts and fun stuff for my AVR projects!

Noter: I have just installed Linux Mint 16 Cinnamon. So far (my second day) have done some things with it. It looks & kinda works like my old XP. Now to try to load some of the Windows programs I need to use... Jim

Good move! I think you'll be happy in the long run. In my case the only Windows program I really care about is for my MSO-19 logic analyzer/scope, and it wouldn't work with the Windows emulator (Wine), so I installed a VMware Player virtual machine and loaded XP on it and now I can run the analyzer. I keep thinking I will get a Linux-compatible analyzer but haven't yet.
One of the first things I did on Linux was get the toolchain installed and working so I could carry on with my AVR projects. As I recall it wasn't too hard, but as my first task it took some time to get it going.

It has been a while since I wrote on this thread but I want to ask again because I didn't really understand the answers to my question. I am not sure that I was clear enough with my question so I will try to reword it. If a programmer wants to create a shared function file in C, they need to use three files. They are the header file which sets up the function parameters, the implementation file which implements the function, and a main program which calls the function. I created a simple example of a shared function file that simply adds two numbers together. First I created the header file and called it "header.h". This is the code:

int add(int,int);

Then I created an implementation file called "implementation.c":

#include "header.h"

int add(int a, int b)
{
    int c = a + b;
    return c;
}

Finally, I created the main program which uses the add function and called it main.c:

#include <stdio.h>
#include "header.h"

int main()
{
    int a = 2;
    int b = 5;
    int c = add(a,b);
    printf("c = %d", c);
    return 0;
}

It is easy enough to understand the header and implementation files of functions that are created by myself, other programmers, and the Nerdkit creators. However, when I look at the standard header files that come with the compiler (math.h, stdio.h) they confuse me. The first problem I have is that I notice that I have multiple standard header files on my computer with the same name but different code in them. I believe that this is because I have different compilers on my computer which have different versions of the standard header files. For example, stdio.h for one compiler would be different code from stdio.h of another compiler, and I have both of them on my computer. Is my theory accurate? If so, it leads to more questions.
Does a standard header file like stdio.h do the same thing for one compiler as it does for another compiler? If so, then why is the code written for stdio.h different for each compiler? If stdio.h does different things for different compilers then why give them the same name? Wouldn't that lead to confusion? I still want to know where the implementation files are for each header file. For example, I would assume that if there are header files called stdio.h and math.h then there should be files somewhere on my computer called stdio.c and math.c which show the implementation of these functions. I know that the implementation file doesn't have to have the same name as the header file, but I assumed it should exist somewhere on my computer. I asked this question in my first post but I didn't really understand the answer that you all gave me, so I want a clear answer to my next question. Do the implementation files for the standard header files exist on my computer? Yes or no? If they do exist then what are their file names? If they don't exist then how is it possible? How could a header file do anything if it contains the function parameters to a function that doesn't exist? These standard header files have been confusing me for a while, so if you provide me with an answer please be as specific as possible, and thanks again for all your help.

jmuthe, No, the compiler maker didn't provide you with their source files-- you don't need them. Email me, I'll explain it. BM

Thanks BobaMosFet. I can't E-mail you though because you have not given me your E-mail address, so I will give you mine. It is jmuthe@yahoo.com. It is not my main E-mail account so it is no big deal if I give it out. I appreciate your help but I wonder why you want me to exchange the information through E-mail. Why not provide the information on this forum so that everyone can have access to the answer?

My email is listed elsewhere here on the site, sorry, I thought everyone had found it.
I'm on a cell right now; I would address the other when I got to a terminal. There are 3 kinds of files: '.c' (which is program logic), '.h' (which is mostly for external references), and shared libraries (compiled linkable objects-- like stdio). In many cases, for performance reasons, compiler makers don't provide C source code to their libraries because how they do something may be proprietary, it may be written in assembly, or they may just not want to. What they do provide is a linkable library and a header file. I'm sure in the case of GNU, if you look on Google, you can find the source code to the stdio library on GNU's site or their official repository. So when you include a .h file for stdio and call stdio functions, the compiler gives you access to only what you need to know about within the library via the header file, and it links in just the pieces of the library you use (lower-quality or early compilers linked the entire object in).

So you are saying that the implementation file in C doesn't exist on my computer. Are you saying that the people who created the compiler created implementation files in whatever programming language they wanted? Then those files were automatically attached to the compiler when we downloaded it, so that the compiler automatically reacts to the standard header files? This is how I interpret what you are saying. Is it correct or not?

The 'implementation' file, the '.c' (source) file (more than one was used to make the library for stdio), is not included on your computer. The people who wrote the compiler wrote all the 'implementation' files in either C or assembler, or a mixture of both. Then they precompiled all of their source files for the basic functions that C supports into a '.a' file. A '.a' file is an archive of object ('.o') files that are relatively addressed so pieces can be pulled in and compiled into your program when Make is executed (by the linker and the assembler).
For GCC, the file in question is 'libgcc.a', found in the install directories where the compiler was installed. There are others for specific additions and for other reasons, but all the basic 'required' C functions are in libgcc.a.

Thanks for the info. You have helped me get a better understanding of "C" & compilers. Jim
http://www.nerdkits.com/forum/thread/2846/
Hi everyone, my friend Carissa is having problems with her computer science program, so I researched everything I could about it and I found this forum. I am not familiar with the "technical terms" of computer science, but I will give you all that she sent me. Any help/hints would be great. So here's what she said:

"I have completed my assignment and I have no idea why the "Ants" in my program won't move. My assignment is:

GOAL: Create a simple 2D simulation of ants crawling around. This is the first half of a predator-prey simulation described in the textbook as Project #4, p. 519 (at the end of Chapter 8). The second assignment this semester will be to add the doodlebugs. The ants live in a world composed of a 20 x 20 grid of cells. Only one ant may occupy a cell at a time. The grid is enclosed, so an ant is not allowed to move off the edges of the grid. Time is simulated in time steps. Each ant performs some action every time step. The ants behave according to the following model:

Move. Every time step, randomly try to move up, down, left, or right. If the cell in the selected direction is occupied or would move the ant off the grid, then the ant stays in the current cell.

Breed. If an ant survives for three time steps (which all our ants will do in this first assignment since there are no doodlebugs to eat them).

Write a program to implement this simulation and draw the world using ASCII characters (say "A" for an ant and "." for an empty cell). Initialize the world with 20 ants in random starting positions. After each time step, draw the world and prompt the user to press Enter to move to the next time step (like I did in HorseRacing).

Hand-in: Submit your Java code and a sample run showing 5 or 6 time steps of a simulation using the dropbox on the class D2L site. Be sure your code contains comments saying who wrote it and why.
Java source files:

File: Ant.java

/**
 * @author Carissa Roberts
 * @version 1.23.2013
 * Assignment 1
 *
 * Represents an 'Ant' and extends the Organism class.
 * Overrides the breed, move, and toString methods from the Organism class.
 */
public class Ant extends Organism {

    private int row;
    private int col;
    private int stepsSurvived;

    /**
     * Constructors call the super constructors (i.e. the constructor from the
     * Organism class) and initialize the stepsSurvived variable to zero.
     */
    public Ant() {
        super();
        stepsSurvived = 0;
    }

    public Ant(World grid, int r, int c) {
        super(grid, r, c);
        stepsSurvived = 0;
    }

    /**
     * Spawns another Ant in an adjacent location if stepsSurvived is three
     * and the chosen location is null.
     * Then the stepsSurvived counter is reset to 0.
     */
    public void breed() {
        if (stepsSurvived == 3) {
            if ((col > 0) && (world.getOrgAt(row, col - 1) == null)) {
                Ant a = new Ant(world, row, col - 1);
                stepsSurvived = 0;
            } else if ((col < world.SIZE - 1) && (world.getOrgAt(row, col + 1) == null)) {
                Ant a = new Ant(world, row, col + 1);
                stepsSurvived = 0;
            } else if ((row > 0) && (world.getOrgAt(row - 1, col) == null)) {
                Ant a = new Ant(world, row - 1, col);
                stepsSurvived = 0;
            } else if ((row < world.SIZE - 1) && world.getOrgAt(row + 1, col) == null) {
                Ant a = new Ant(world, row + 1, col);
                stepsSurvived = 0;
            }
        }
        stepsSurvived = 0;
    }

    /**
     * Returns a printable symbol to represent an Ant.
     */
    public String toString() {
        return "A";
    }

    /**
     * Uses Math.random to randomly decide which direction an Ant will move.
     * If the cell in the selected direction is occupied or would move the ant
     * off the grid, then the ant stays in the current cell.
     */
    public void move() {
        if (row < world.SIZE) {
            if (col < world.SIZE) {
                // generates a random double number to decide which direction to move
                double x = Math.random();
                // move up
                if (x < 0.25) {
                    if ((row > 0) && (world.getOrgAt(row - 1, col) == null)) {
                        world.setOrgAt(row - 1, col, world.getOrgAt(row, col));
                        world.setOrgAt(row, col, null);
                        row--;
                    }
                }
                // move right
                else if (x < 0.5) {
                    if ((col < world.SIZE - 1) && (world.getOrgAt(row, col + 1) == null)) {
                        world.setOrgAt(row, col + 1, world.getOrgAt(row, col));
                        world.setOrgAt(row, col, null);
                        col++;
                    }
                }
                // move down
                else if ((x < 0.75) && (row < world.SIZE - 1) && (world.getOrgAt(row + 1, col) == null)) {
                    world.setOrgAt(row + 1, col, world.getOrgAt(row, col));
                    world.setOrgAt(row, col, null);
                    row++;
                }
                // move left
                else {
                    if ((col > 0) && (row < world.SIZE - 1) && (world.getOrgAt(row, col - 1) == null)) {
                        world.setOrgAt(row, col - 1, world.getOrgAt(row, col));
                        world.setOrgAt(row, col, null);
                        col--;
                    }
                }
            }
            stepsSurvived++;
        }
    }
}

File: World.java

/**
 * @author Carissa Roberts
 * @version 1.23.2013
 * Assignment 1
 *
 * Represents the 'World' or the 'Grid' in which the Ants live, move, and breed.
 */
import java.util.Scanner;
import java.util.Random;

public class World {

    /**
     * Creates a grid and sets all of the spaces to null.
     */
    protected final static int SIZE = 20;
    protected static Organism[][] land = new Organism[SIZE][SIZE];

    public World() {
        for (int r = 0; r < land.length; r++) {
            for (int c = 0; c < land[r].length; c++) {
                land[r][c] = null;
            }
        }
    }

    /**
     * Returns the Organism in the sent (parameter) location.
     * Returns null if the location is empty.
     */
    public Organism getOrgAt(int r, int c) {
        if ((r >= 0) && (r < World.SIZE) && (c >= 0) && (c < World.SIZE)) {
            if (land[r][c] == null) {
                return null;
            } else {
                return land[r][c];
            }
        }
        return null;
    }

    /**
     * Places the organism in the given location.
     */
    public void setOrgAt(int r, int c, Organism organism) {
        if ((r >= 0) && (r < World.SIZE) && (c >= 0) && (c < World.SIZE)) {
            land[r][c] = organism;
        }
    }

    /**
     * Uses a for loop to display the grid. If there is nothing in the location
     * a . is shown, otherwise the correct toString() method is called.
     */
    public void display() {
        System.out.println("\n");
        for (int r = 0; r < land.length; r++) {
            for (int c = 0; c < land[r].length; c++) {
                if (land[r][c] == null) {
                    System.out.print(".");
                } else {
                    System.out.print(land[r][c].toString());
                }
            }
            System.out.println("");
        }
    }

    /**
     * Runs through one simulation step by moving the ants and then breeding them.
     */
    public void moveOneStep() {
        for (int r = 0; r < land.length; r++) {
            for (int c = 0; c < land[r].length; c++) {
                if (land[r][c] != null) {
                    land[r][c].setHasMoved(false);
                }
            }
        }
        // check for ants and make them move if the boolean variable is false
        for (int r = 0; r < land.length; r++) {
            for (int c = 0; c < land[r].length; c++) {
                if ((land[r][c] != null) && (land[r][c].getHasMoved() == false)) {
                    land[r][c].move();
                    land[r][c].setHasMoved(true);
                }
            }
        }
        // check the breed method, and breed after the ants have moved.
        for (int r = 0; r < land.length; r++) {
            for (int c = 0; c < land[r].length; c++) {
                if ((land[r][c] != null) && (land[r][c].getHasMoved() == true)) {
                    land[r][c].breed();
                }
            }
        }
    }

    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        // creates a new World and Random object
        World grid = new World();
        Random random = new Random();
        System.out.println("Ant Simulation. Press Enter to simulate one time step.");
        grid.display();
        /**
         * Uses a for loop to initialize all ants and place them in the grid
         * and in the Ant array.
         */
        int tempRow;
        int tempCol;
        Ant[] ants = new Ant[20];
        for (int numAntsAdded = 0; numAntsAdded <= 19; numAntsAdded++) {
            tempRow = random.nextInt(20);
            tempCol = random.nextInt(20);
            if (land[tempRow][tempCol] == null) {
                ants[numAntsAdded] = new Ant(grid, tempRow, tempCol);
            } else {
                // does not add an Ant or increment the counter if the location is already filled
                numAntsAdded--;
            }
        }
        /**
         * Run through a couple of simulations.
         */
        String temp = keyboard.nextLine();
    }
}

File: Organism.java

/**
 * @author Carissa Roberts
 * @version 1.23.2013
 * Assignment 1
 *
 * An abstract class to represent an 'Organism' in the created grid 'World'.
 */
public abstract class Organism {

    private int row;
    private int col;
    protected int stepsSurvived;
    protected boolean hasMoved;
    protected World world;

    /**
     * Organism constructors initialize the variables. The second constructor
     * accepts a World object, a row, and a column as parameters.
     */
    public Organism() {
        hasMoved = false;
        stepsSurvived = 0;
        row = 0;
        col = 0;
    }

    public Organism(World world, int row, int col) {
        this.world = world;
        hasMoved = false;
        stepsSurvived = 0;
        this.row = row;
        this.col = col;
        world.setOrgAt(row, col, this);
    }

    /**
     * Accessor and mutator methods get and set the value of the hasMoved variable.
     */
    public boolean getHasMoved() {
        return hasMoved;
    }

    public void setHasMoved(boolean hasMoved) {
        this.hasMoved = hasMoved;
    }

    /**
     * Returns a printable symbol to represent an Organism.
     */
    public String toString() {
        return "O";
    }

    /**
     * Abstract methods to be overridden in extended classes.
     */
    public abstract void move();
    public abstract void breed();
}

The IDE used with this is either BlueJ or Eclipse. If someone could please take the time to help figure out what's wrong with her code, that would truly be great. Thanks in advance -Mitsuwa
http://www.javaprogrammingforums.com/object-oriented-programming/22810-not-sure-what-logic-error-%5Bhelp%5D.html
Let's start out with a simple micro benchmark:

using System;
using System.Threading;

class Program
{
    public static void Main()
    {
        int start = Environment.TickCount;
        double[] d = new double[1000];
        for (int i = 0; i < 1000000; i++)
        {
            for (int j = 0; j < d.Length; j++)
            {
                d[j] = (double)(3.0 * d[j]);
            }
        }
        int end = Environment.TickCount;
        Console.WriteLine(end - start);
    }
}

On my system this takes about 7 seconds when run in optimized mode (i.e. not in the debugger). Here's the optimized x86 code generated by the 2.0 CLR JIT for the body of the inner loop:

fld qword ptr [ecx+edx*8+8]     ; d[j]
fmul dword ptr ds:[007B1230h]   ; * 3.0
fstp qword ptr [esp]            ; (double)
fld qword ptr [esp]             ; (double)
fstp qword ptr [ecx+edx*8+8]    ; d[j] =

The first thing that jumps out is that the double cast takes two x87 instructions, a store and a load. Part of the reason the cast is expensive is that the value has to leave the FPU, go to main memory, and come back into the FPU. In this particular case it turns out to be very expensive, because esp happens to be not 8 byte aligned.

Making a seemingly unrelated change can make the micro benchmark much faster: just adding the following two lines at the top of the Main method will make the loop run in about 2.3 seconds on my system:

double dv = 0.0;
Interlocked.CompareExchange(ref dv, dv, dv);

The reason for this performance improvement becomes clear when we look at the method prologue in the new situation:

push ebp
mov ebp,esp
and esp,0FFFFFFF8h
push edi
push esi
push ebx
sub esp,14h

This results in an 8 byte aligned esp pointer. As a result the fstp/fld instructions will run much faster. It looks like a "bug" in the JIT that it doesn't align the stack in the first scenario.

Of course, the much more obvious question is: why does the cast generate code at all? Isn't a double already a double? Before answering this question, let's first look at another minor change to the micro benchmark.
Let's remove the Interlocked.CompareExchange() again and change the inner loop body to the following:

double v = 3.0 * d[j];
d[j] = (double)v;

With this change, the loop now takes just 1 second on my system. When we look at the x86 code generated by the JIT, it becomes obvious why:

fld qword ptr [ecx+edx*8+8]
fmul dword ptr ds:[002A1170h]
fstp qword ptr [ecx+edx*8+8]

The redundant fstp/fld instructions are gone.

Back to the question of why the cast isn't always optimized away. The reason for this lies in the fact that the x87 FPU internally uses an extended 80 bit representation for floating point numbers. When you explicitly cast to a double, the ECMA CLI specification requires that this results in a conversion from the internal representation into the IEEE 64 bit representation. Of course, in this scenario we're already storing the value in memory, so this necessarily implies a conversion to the 64 bit representation, making the extra fstp/fld unnecessary.

Finally, in x64 mode all three variations of the benchmark take 1 second on my system. This is because the x64 CLR JIT uses SSE instructions that internally work on the IEEE 64 bit representation of doubles, so the cast is optimized away in all situations here. For completeness, here's the code generated by the x64 JIT for the inner loop body:

movsd xmm0,mmword ptr [rcx]
mulsd xmm0,mmword ptr [000000C0h]
movsd mmword ptr [rcx],xmm0

I made another 0.34 update, since 0.36 is probably still a ways off. Changes:
http://weblog.ikvm.net/default.aspx?date=2007-08-07
MSYNC(2)                  BSD Programmer's Manual                  MSYNC(2)

NAME
     msync - synchronize a mapped region

SYNOPSIS
     #include <sys/types.h>
     #include <sys/mman.h>

     int msync(void *addr, size_t len, int flags);

DESCRIPTION
     The msync() function writes modified pages in the given region back to
     the filesystem. If len is non-zero, only those pages containing addr
     and len-1 succeeding locations will be flushed. Any required
     synchronization of memory caches will also take place at this time.
     Filesystem operations on a file that is mapped for shared modifications
     are unpredictable except after an msync().

     The flags argument is formed by OR'ing the following values:

           MS_ASYNC        Perform asynchronous writes.
           MS_SYNC         Perform synchronous writes.
           MS_INVALIDATE   Invalidate cached data after writing.

RETURN VALUES
     Upon successful completion, a value of 0 is returned. Otherwise, a
     value of -1 is returned and errno is set to indicate the error.

ERRORS
     The following errors may be reported:

     [EBUSY]    The MS_INVALIDATE flag was specified and a portion of the
                specified region was locked with mlock(2).

     [EINVAL]   The specified flags argument was invalid.

     [EINVAL]   The addr parameter was not page aligned.

     [ENOMEM]   Addresses in the specified region are outside the range
                allowed for the address space of the process, or specify
                one or more pages which are unmapped.

     [EIO]      An I/O error occurred while writing.

SEE ALSO
     madvise(2), mincore(2), minherit(2), mprotect(2), munmap(2)

HISTORY
     The msync() function first appeared in 4.4BSD. It was modified to
     conform to IEEE Std 1003.1b-1993 ("POSIX").

BUGS
     Writes are currently done synchronously even if the MS_ASYNC flag is
     specified.

MirOS BSD #10-current                October
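As an aside (not part of the man page): the same semantics are exposed in Python through the standard mmap module, whose flush() method is implemented with msync(2) on Unix systems. A small sketch:

```python
# Illustration of msync semantics via Python's mmap module: modify a shared
# file mapping in place, then flush() the modified pages back to the file.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")
os.close(fd)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)   # shared mapping of the whole file
    mm[0:5] = b"HELLO"              # modify the mapped region in place
    mm.flush()                      # msync(): write modified pages to the file
    mm.close()

with open(path, "rb") as f:
    data = f.read()
os.unlink(path)
print(data)  # → b'HELLO world'
```

Without the flush() (or an munmap/close), the man page's warning applies: reading the file through ordinary filesystem operations while it is mapped for shared modification is unpredictable.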
http://www.mirbsd.org/htman/sparc/man2/msync.htm
> If USE_NO_MINGW_SETJMP_TWO_ARGS is defined, it seems that emacs crash > when byte-compile. I do not know what happened in this case exactly, > sorry. > > In ms-w32.h, some functions are defined to sys_..., > such as chmod -> sys_chmod. > if it is defined before "#include <sys/stat.h>", > functions in sys/stat.h are changed. > It cause warning like "warning: 'sys_chmod' redeclared without > dllimport...", I think. so I include it here. > > __start is entry point that specified by linker option in makefile, > but 64bit gcc does not add '_' to symbol, so change _start to > __start. > > > --- ./nt/inc/sys/time.h.orig 2013-03-26 17:33:23.000000000 +0900 > > +++ ./nt/inc/sys/time.h 2013-03-26 21:46:23.425698700 +0900 > : > > This breaks the MinGW32 build, so please see if the current trunk has > > a better solution for this problem. > > I understand. > > > --- ./src/image.c.orig 2013-03-24 18:16:45.000000000 +0900 > > +++ ./src/image.c 2013-03-26 21:46:23.437698700 +0900 > > @@ -5545,6 +5545,9 @@ > > png_byte **rows; > > }; > > > > +#ifdef _W64 > > +#define _setjmp setjmp > > +#endif > > > > Why is this needed? > > In image.c, _setjmp() is used with 1 arg. It seems that some compile error. > This is also related to the following. > > > --- ./src/lisp.h.orig 2013-03-25 12:31:37.000000000 +0900 > > +++ ./src/lisp.h 2013-03-26 21:46:23.442698700 +0900 > > @@ -2164,7 +2164,11 @@ > > > > #ifdef HAVE__SETJMP > > typedef jmp_buf sys_jmp_buf; > > +#ifdef _W64 > > +# define sys_setjmp(j) setjmp (j) > > +#else > > # define sys_setjmp(j) _setjmp (j) > > +#endif > > # define sys_longjmp(j, v) _longjmp (j, v) > > > > And this? > > If USE_NO_MINGW_SETJMP_TWO_ARGS is not defined, > it seems that _setjmp() need 2 args (see mingw-w64's setjmp.h). > so I change this. I think I found a cleaner way of handling the MinGW64 setjmp interface, committed as trunk revision 112145. The 64-bit MinGW64 build still needs some changes, in nt/configure.bat and elsewhere. 
But I hope the 32-bit build is OK now. Can you two please see if the latest trunk builds with MinGW64 for you? (Remember to re-run configure.bat, as some changes require that.) If there are any problems left, whether errors or warnings, please post them. TIA
https://lists.gnu.org/archive/html/emacs-devel/2013-03/msg00766.html
22 December 2011 14:55 [Source: ICIS news] HOUSTON (ICIS)--Williams has agreed to acquire the Laser gas gathering system in the Marcellus Shale basin for about $750m (€578m), the diversified company said.

Williams is buying the system from Delphi Midstream Partners, which is owned by American Securities, it said.

The Laser system comprises 33 miles of 16-inch natural gas pipeline and associated gathering facilities.

As gas production in the Marcellus increases, the Laser system is expected to reach a capacity of 1.3bn cubic feet/day, Williams said.

"The acquisition of the Laser system continues our strategy of safely and reliably serving producers through large-scale midstream infrastructure in the Marcellus Shale and other basins," said Rory Miller, Williams' senior vice president, midstream.

Williams expects to close the transaction in the first quarter of 2012.

In September, Williams announced plans to expand its cracker.

($1 = €0.77)
http://www.icis.com/Articles/2011/12/22/9519028/williams-to-acquire-marcellus-shale-gas-gathering-system-for.html
Hi~ Everyone, I want to mass update 3,000 existing rows in an attribute table using an Excel table, without deleting the geometry. Are there any tools or methods that can do that? Thank you

As long as you have a common field between the tables, use a join. But it would be a good idea to use Excel To Table to bring the table into your geodatabase first.

Yes, you are right. Joining the tables creates additional fields, and I just want to use the existing schema for the update. Because I have 3,000 rows with 20+ fields, Calculate Field cannot update so many things in one shot. I am looking for a tool in ArcGIS with that function, or some good method to achieve a fast update, thank you~

Make the join permanent by saving to a new feature class, then just delete the redundant fields. Deletion is way quicker than calculating. If you need speed, you can do this externally with arcpy and/or numpy, but I would suggest just using what you have and the tools you are familiar with: do the join, save the combined result, then delete the fields you don't want.

Sometimes it is also best just to split the data into rows to fix and rows to keep: extract the rows that need updating, perform the above operation, delete the extra columns, then append the rows that didn't need fixing. Sometimes it is quicker to divide and recombine into a new incarnation. Often people spend too much time trying to make changing feature classes work for them, when recreating new ones in the desired structure is often quicker.

Thank you Dan, I will try this way. But it looks like no quick tool can do this... That's a pity.

Zhan, is it only one field that you are using as the linking variable between the two tables? Or is it more complicated than that?

There is one common field. I want to keep the schema and just update 3,000 of the attribute rows; there are more than 20 fields in this attribute table, with 50,000+ rows.
I'm guessing you could use a 'simple' python script to do this update, but it would be a no-turning-back sort of deal (no undo button). Maybe something like this (and I'm sure there's a way to make this more efficient... I'm just not the most python savvy):

# import modules
import sys
import arcpy
from time import strftime

# start the timer to see how long the script takes
print 'Start Script: ' + strftime('%Y-%m-%d %H:%M:%S')

# identify variables for feature class and table
workspace = 'C:/Users/Name/Documents/GISData.gdb'
fc = 'FeatureClassName'
# tbl = 'TableName'  # if using a table in your geodatabase
# if not, use this for a CSV
tbl = 'C:/Users/Name/Documents/TableName.csv'

# set the workspace environment to our workspace
arcpy.env.workspace = workspace

# identify fields used for updating - only pick the ones you need
# for this example, we are only using 10 fields
fieldsFC = ['Field1', 'Field2', 'Field3', 'Field4', 'Field5',
            'Field6', 'Field7', 'Field8', 'Field9', 'Field10']

# Here is what we're going to do:
# One-by-one, check each row in the Feature Class against
# each row in the Table, and update if there are changes...

# first, it helps to "zero" out the arrays and the counter
fcrow = ['', '', '', '', '', '', '', '', '', '']
tblrow = ['', '', '', '', '', '', '', '', '', '']
counter = 0

# putting this all in a try-except statement to catch any errors
try:
    with arcpy.da.SearchCursor(tbl, fieldsFC) as tblCursor:
        for tblrow in tblCursor:
            with arcpy.da.UpdateCursor(fc, fieldsFC) as fcCursor:
                fcrow = ['', '', '', '', '', '', '', '', '', '']  # ...need to zero out the row again...
                # in this example, I am seeing if the first column (tblrow[0])
                # matches AND if the fourth column (tblrow[3]) matches,
                # then I go through with the updating of the rows in the
                # feature class (fc) with the table rows
                for fcrow in fcCursor:
                    if (str(tblrow[0]) == str(fcrow[0]) and str(tblrow[3]) == str(fcrow[3])):
                        fcrow[1] = tblrow[1]
                        fcrow[2] = tblrow[2]
                        fcrow[4] = tblrow[4]
                        fcrow[5] = tblrow[5]
                        fcrow[6] = tblrow[6]
                        fcrow[7] = tblrow[7]
                        fcrow[8] = tblrow[8]
                        fcrow[9] = tblrow[9]
                        print('Row number ' + str(fcrow[0]) + ' was updated.')
                        fcCursor.updateRow(fcrow)
                        counter = counter + 1
                        continue
# except statements to catch the errors (the more specific arcpy error first)
except arcpy.ExecuteError:
    print(arcpy.GetMessages(2))
except Exception:
    e = sys.exc_info()[1]
    print(e.args[0])
    arcpy.AddError(e.args[0])

# how many rows were updated?
print 'Updated ' + str(counter) + ' rows.'

# end the timer to see how long the script took
print 'Finished Script: ' + strftime('%Y-%m-%d %H:%M:%S')

MidnightYell2003, for speed, it is better to create a dictionary with the values and use a single update cursor to update the relevant records. I would also recommend importing the Excel to a file geodatabase table first and not using CSV or Excel directly. Make sure that you don't have to guess the data type.
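The dictionary approach suggested above can be sketched as follows. This is an illustration of the pattern only: plain Python lists stand in for the arcpy SearchCursor/UpdateCursor rows, and the field layout is made up for the example. One pass builds a lookup keyed on the common field; a second, single pass applies the updates.

```python
# The dictionary pattern: one pass over the update table to build a lookup,
# then ONE pass over the target rows (instead of rescanning the target for
# every table row, as the nested-cursor script above does).

# rows from the Excel/CSV table: (key, new_value) - hypothetical data
table_rows = [("A1", 10), ("B2", 20), ("C3", 30)]

# rows of the feature class attribute table: [key, value] - hypothetical data
fc_rows = [["A1", 0], ["B2", 0], ["Z9", 99]]

# pass 1: build the lookup dictionary keyed on the common field
lookup = {key: val for key, val in table_rows}

# pass 2: the single "update cursor" pass; rows without a match are untouched
updated = 0
for row in fc_rows:
    if row[0] in lookup:
        row[1] = lookup[row[0]]
        updated += 1

print(fc_rows, updated)  # → [['A1', 10], ['B2', 20], ['Z9', 99]] 2
```

With real data, the first pass would be a SearchCursor over the imported table and the second an UpdateCursor over the feature class; the cost drops from rows x rows to rows + rows.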
https://community.esri.com/t5/data-management-questions/mass-update-attribute-table-rows/td-p/383171
Sometimes we need to record something after we give the software to our customers, so when something bad happens we can find out what went wrong without a debug environment. Sometimes we want to get some trace information from the release build of the program. In these circumstances, a logfile is useful. There are lots of logfile code examples here at CodeProject and elsewhere. This one is very simple compared to some of the others. The class CLogFile has only three member functions: a constructor, a destructor, and a member called Write. Let's see how to use it.

The constructor needs 3 parameters:

CLogFile(LPCTSTR strFile, bool bAppend = FALSE, long lTruncate = 4096)

The first parameter is the file name. It can include an absolute path, or be just a file name. In the latter case, the logfile is created in the same directory as the executable. This has nothing to do with the "current directory", which always brings me trouble. The second parameter indicates whether the log information should be appended to the end of the file if the file already exists. The third parameter indicates the truncate size: if the logfile exceeds this number, writing rewinds to the beginning and continues, so subsequent information is written to the file from the start.

The second member is the destructor. It does some cleanup; nothing more to say. The most important member is the third:

void Write(LPCTSTR pszFormat, ...)

This member writes a line to the logfile. Its usage is just the same as printf. It seems that there could be much to say about this member, but I'd prefer to keep things simple. And it is simple; anything else is in the code. The whole thing is 108 lines of code in a .h file, including comments and blank lines.

My project was a 24/7 application, so one log file could get too big. So I made a new log file every day, which meant that after some months a lot of files had accumulated in each directory. Finally I made folders for every year and every month, and the log files are stored in the month folders.
I made the following function to change the current file name:

void ChangeFile(LPCTSTR strFile, bool bAppend = TRUE, long lTruncate = 4096);

The parameters are the same as in the original constructor. When the file name is different from the stored m_filename, it first closes the old logger file, then opens the new one in the selected folder.

Most importantly, don't forget:

#define _DEBUG_LOG TRUE
#include "logfile.h"

The following code shows an example of how this can be used. %02i is a useful choice for the month, so that month 09 does not sort after month 10. For testing, use systime.wMinute instead to create new folders faster.

CString name;
SYSTEMTIME systime;
GetLocalTime(&systime);
name.Format("%i\\%02i\\log_%02i%02i%02i.txt",
            systime.wYear, systime.wMonth,
            systime.wYear, systime.wMonth, systime.wDay);
m_log.ChangeFile(name);

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here.

void CLogFile::CreateDirectories(LPCTSTR filename)
{
    char drivename[4];
    char path[MAX_PATH+1];
    char name[MAX_PATH+1];
    char ext[MAX_PATH+1];
    char seps[] = "/\\";
    char *token;

    _splitpath(filename, drivename, path, name, ext);
    sprintf(drivename, "%s\\", drivename);
    _chdir(drivename);
    token = strtok(path, seps);
    while (token != NULL)
    {
        if (_chdir(token) == -1)
        {
            _mkdir(token);
            _chdir(token);
        }
        token = strtok(NULL, seps);
    }
}

char m_filename[MAX_PATH];                 // CString m_filename
void CreateDirectories(LPCTSTR filename);  // CreateDirectories(CString filename);

void CLogFile::CreateDirectories(CString filename)
{
    CString drivename, path;
    int per = 0;

    drivename = filename.Left(3);
    _chdir(drivename);
    filename = filename.Mid(3);
    while (!filename.IsEmpty())
    {
        per = filename.Find('\\');
        if (per == -1)
            break;
        path = filename.Left(per);
        if (_chdir(path))
        {
            _mkdir(path);
            _chdir(path);
        }
        filename = filename.Mid(per + 1);
    }
}
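For comparison, the truncate-and-rewind behaviour described above can be sketched in a few lines of Python. This is an illustration of the idea only, not a port of the article's C++ class; names and defaults mirror CLogFile's but are otherwise made up:

```python
# Sketch of the truncate-and-rewind logfile idea: once the write position
# passes `truncate` bytes, the next write seeks back to offset 0 and
# overwrites from the beginning, so the file never grows without bound.
import os

class LogFile:
    def __init__(self, path, append=False, truncate=4096):
        self.truncate = truncate
        mode = "r+" if (append and os.path.exists(path)) else "w"
        self.f = open(path, mode)
        if mode == "r+":
            self.f.seek(0, os.SEEK_END)   # append mode: start writing at the end

    def write(self, fmt, *args):
        if self.f.tell() > self.truncate:
            self.f.seek(0)                # rewind, like CLogFile's lTruncate
        self.f.write((fmt % args) + "\n") # printf-style formatting, like Write
        self.f.flush()

    def close(self):
        self.f.close()
```

Note that after a rewind the tail of the previous pass is still in the file until it is overwritten, which is exactly the trade-off the simple C++ class makes too.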
https://www.codeproject.com/Articles/2486/A-Simple-LogFile
Hi everyone,

AppCode 2016.3 is here, so download and try it right now! The patch update is also available if you are using the 2016.2.4 version. In this post we will cover the most important changes included. Let's get down to it!

Swift

Language support

AppCode 2016.3 delivers the first part of Swift 3 support, and here you can see which tasks are in progress and are included in this release. Note that some proposals (those that have None in the Affected areas column) do not cause problems visible to the end user but still require some development activity from our side; that's why they're in the Open state.

Formatting

Reformat your code easily with the new formatting options for the following Swift code constructs:

- colons in type annotations, dictionary type declarations and key:value pairs
- closures
- chained method calls
- function declaration parameters and call arguments
- condition clauses

Read more about their usage in this blog post or see them in action in our What's New video below.

Create from usage

Code generation is one of the essential areas where AppCode is different. Lots of different actions and intentions are already available for Objective-C/C/C++, and with every release more and more of them become available in Swift. Override/Implement (^O/^I) helps you quickly implement all required methods in a protocol and override multiple methods at once. Live Templates allow you to create reusable and interactive code snippets. Introduce Variable (⌥⌘V) refactoring automates extracting Swift expressions into local variables.

AppCode 2016.3 adds a new member to our code generation family: Create from usage intentions for Swift functions, variables and properties. Write the signature of a non-existing function (a class method or a global one) when prototyping your code, invoke ⌥⏎, and have its declaration created automatically:

Need to create a local variable or a global one? Or add a new property to your existing class?
Use it in your code, invoke ⌥⏎, and save yourself the typing of its declaration:

Performance improvements

In this release we dedicated a lot of time to performance improvements in Swift editing. The first and biggest part relates to resolution performance. Our previous attempt to implement recursive closure resolution in Swift improved symbol resolution in the case of closures, but its performance was far from perfect. We've managed to fix a lot of the slowdowns and issues with inaccurate resolution. As a result, even complex Swift files should now be highlighted faster, while code entities (including closure shorthands, which stopped working in the 2016.2.x version) should now be resolved significantly better.

SourceKit integration became the second area of performance improvements, as editor optimizations allowed us to reliably reproduce some cases where errors, warnings and fix-its in Swift code were shown very slowly. These issues are now fixed, so errors and warnings should be shown in the editor much faster than before.

Finally, code completion in Swift and mixed code should work much faster in general now, since we heavily optimized completion calculation and caching.

UI tests

The AppCode test runner now supports UI tests. Run all tests in your test file or execute a single one simply by pressing ^⇧R (or debug via ^⇧D). Re-run only failed tests in one click and easily filter them out from successful ones:

Sort UI tests by duration:

And benefit from the built-in test history:

Semantic highlighting

Semantic highlighting helps you understand how data flows through the code by highlighting each variable/parameter with its own color. You can enable it in Preferences | Editor | Color & Fonts | Language Defaults and use it when writing Swift, Objective-C or C++ code:

C/C++ language support

We usually focus on new Objective-C and Swift support features.
But in addition to these languages, this AppCode release contains lots of new C/C++ support features developed by the CLion team, including:

- User-defined literals.
- C++14 digit separators support.
- Overload resolution support.
- Dozens of code analysis improvements.
- C11 keywords support, including auto-completion.

Read more about them in this blog post.

Version control

The following changes are available for VCS support:

- Undo commit action is added to the Log context menu for changes that are not pushed yet.
- Ability to restore a deleted local branch.
- New option for Git --signoff commits in the Commit dialog.
- Ability to resolve simple conflicts in one click (non-overlapping changes on one line).
- Performance improvements for filtering in the Git and Mercurial log, as well as an improved UI.

Other changes

- San Francisco is now the default font in the Default and Darcula themes, and it is used across all menus.
- Editor color schemes bundled in AppCode are now editable by default and do not require you to copy the color scheme first.
- The Find in Path dialog now keeps previously used settings (scope, file name filter, context, etc.), regardless of where you call it from.

That wraps it up! Check out this short demo to see the new features in action:

Your AppCode team
JetBrains
The Drive to Develop

Thanks for the great work, guys. Keep it up!

Hi! Apple just released Xcode 8.2, which (according to the Events popup in AppCode) AppCode 2016.3 is not compatible with. About when can we expect a release of a compatible version?

We are testing it right now; no compatibility issues found so far. As soon as we are sure, we will disable this warning. If no critical issues are found, there is a high probability that the warning will be disabled in one of the 2016.3.x updates.

Cool, thank you!
Xcode and AppCode compiles and runs code successfully, but AppCode shows errors like Cannot find 'libxml'in file where I added libxml library. For example, #include. 2. In Objective-C project I added library Chartsusing Cocoapods. Then created .swift subclass of one of the library classes (imported library to file with import Chartsstatement). Then created Objective-C file subclass of UIView and added .swift class as subview (to use .swift class imported #import "-Swift.h"). The errors says Cannot resolve method alloc for interface ..., Interface ... doesn't have a property ...and Types CGSize* and CGSize are not compatible. As I said project compiles and runs, but these errors (warnings) are annoying. In both cases Xcode doesn’t show warnings or errors as AppCode. How to avoid these errors?
https://blog.jetbrains.com/objc/2016/12/appcode-2016-3-release/
- 16 Apr, 2012 (2 commits)
  (as suggested in review) the previous "TODO" comment was a leftover when I moved it from the datasrc test. Also moved the callback function to the unnamed namespace, as it doesn't have to be referrable from others.
- 14 Apr, 2012 (6 commits)
  There should be no reason for this test to fail, but I'm just making this 100% sure.
  Updated DatabaseUpdater::deleteRRset so it would use the new method when deleting NSEC3.
- 13 Apr, 2012 (2 commits)
  Extended the test to check if we can actually retrieve added records from that namespace. This is the first step to support updating the NSEC3 namespace of a zone.
  To help support various cases in addRRset() in a less expensive way, introduced a helper RRParameterConverter class. One simple test case was added to confirm the behavior.
- 11 Apr, 2012 (1 commit)
- 05 Apr, 2012 (7 commits)
  We had this for a behavior on NetBSD; the new ASIO (already in our source tree) has a workaround for it. This is for ticket #1823.
- 04 Apr, 2012 (11 commits)
  This was an older version of the branch, and wasn't expected to be a merge target. I'll revert it and then merge the right branch.
  src/lib/python/isc/notify/tests/testdata/test.sqlite3
- 03 Apr, 2012 (11 commits)
  JSON parser improvements.
  Fixed a crash of bindctl when the config unset command was invoked. The code for the command was missing, probably never implemented and obviously never tested.
  It was intended to be a short-term hack until we implement getDiffs(), but apparently we forgot to complete the cleanup task, so this is just an originally planned cleanup. Note that there was a bug in the test code: the change to diff_add_a_data is not to hide a problem in the tested code, but a fix for the test's bug.
https://gitlab.isc.org/sebschrader/kea/-/commits/211934fe44eb6ea5a74857d4312e10b6cd6a3c55
One-stop solution for NLP practitioners, ML developers and data scientists to build effective NLP systems that can perfo…
English | 381 pages | 2021

Table of Contents

Cover
Packt Page
Contributors
Table of Contents
Preface
Chapter 1: Essentials of NLP
  A typical text processing workflow
  Data collection and labeling
  Collecting labeled data
  Development environment setup
  Enabling GPUs on Google Colab
  Text normalization
  Modeling normalized data
  Tokenization
  Segmentation in Japanese
  Modeling tokenized data
  Stop word removal
  Modeling data with stop words removed
  Part-of-speech tagging
  Modeling data with POS tagging
  Stemming and lemmatization
  Vectorizing text
  Count-based vectorization
  Modeling after count-based vectorization
  Term Frequency-Inverse Document Frequency (TF-IDF)
  Modeling using TF-IDF features
  Word vectors
  Pretrained models using Word2Vec embeddings
  Summary
Chapter 2: Understanding Sentiment in Natural Language with BiLSTMs
  Natural language understanding
  Bi-directional LSTMs – BiLSTMs
  RNN building blocks
  Long short-term memory (LSTM) networks
  Gated recurrent units (GRUs)
  Sentiment classification with LSTMs
  Normalization and vectorization
  LSTM model with embeddings
  BiLSTM model
  Summary
Chapter 3: Named Entity Recognition (NER) with BiLSTMs, CRFs, and Viterbi Decoding
  Named Entity Recognition
  The GMB data set
  Normalizing and vectorizing data
  A BiLSTM model
  Conditional random fields (CRFs)
  NER with BiLSTM and CRFs
  Implementing the custom CRF layer, loss, and model
  A custom CRF model
  A custom loss function for NER using a CRF
  Implementing custom training
  Viterbi decoding
  The probability of the first word label
  Summary
Chapter 4: Transfer Learning with BERT
  Transfer learning overview
  Types of transfer learning
  Domain adaptation
  Multi-task learning
  Sequential learning
  IMDb sentiment analysis with GloVe embeddings
  GloVe embeddings
  Creating a pre-trained embedding matrix using GloVe
  Feature extraction model
  Fine-tuning model
  BERT-based transfer learning
  Encoder-decoder networks
  Attention model
  Transformer model
  The bidirectional encoder representations from transformers (BERT) model
  Tokenization and normalization with BERT
  Pre-built BERT classification model
  Custom model with BERT
  Summary
Chapter 5: Generating Text with RNNs and GPT-2
  Generating text – one character at a time
  Data loading and pre-processing
  Data normalization and tokenization
  Training the model
  Implementing learning rate decay as custom callback
  Generating text with greedy search
  Generative Pre-Training (GPT-2) model
  Generating text with GPT-2
  Summary
Chapter 6: Text Summarization with Seq2seq Attention and Transformer Networks
  Naïve-Bayes model for finding keywords
  Evaluating weakly supervised labels on the training set
  Generating unsupervised labels for unlabeled data
  Training BiLSTM on weakly supervised data from Snorkel
  Summary
Chapter 9: Building Conversational AI Applications with Deep Learning
  Overview of conversational agents
  Task-oriented or slot-filling systems
  Question-answering and MRC conversational agents
  General conversational agents
  Summary
Epilogue
Installation and Setup Instructions for Code
  GitHub location
  Chapter 1 installation instructions
  Chapter 2 installation instructions
  Chapter 3 installation instructions
  Chapter 4 installation instructions
  Chapter 5 installation instructions
  Chapter 6 installation instructions
  Chapter 7 installation instructions
  Chapter 8 installation instructions
  Chapter 9 installation instructions
Other Books You May Enjoy
Index

Advanced Natural Language Processing with TensorFlow 2
Build effective real-world NLP applications using NER, RNNs, seq2seq models, Transformers, and more

Ashish Bansal

BIRMINGHAM - MUMBAI

Advanced Natural Language Processing with TensorFlow 2
Copyright © 2021.

Producer: Tushar Gupta
Acquisition Editor – Peer Reviews: Divya Mudaliar
Content Development Editor: Alex Patterson
Technical Editor: Gaurav Gavas
Project Editor: Mrunal Dave
Proofreader: Safis Editing
Indexer: Rekha Nair
Presentation Designer: Sandip Tadge

First published: February 2021
Production reference: 1290121

Published by Packt Publishing Ltd.
Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.

ISBN 978-1-80020-093-7

packt.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry leading tools to help you plan your personal development. Fully searchable for easy access to vital information. Email [email protected] for more details. At, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Contributors

About the author

Ashish Bansal is the Director of Recommendations at Twitch, where he works on building scalable recommendation systems across a variety of product surfaces, connecting content to people. He has worked on recommendations. In many years of work building hybrid recommendation systems balancing collaborative filtering signals with content-based signals, he has spent a lot of time building NLP systems for extracting content signals.
In digital marketing, he built systems to analyze coupons, offers, and subject lines. He has worked on messages, tweets, and news articles, among other types of textual data, applying cutting-edge NLP techniques. He has over 20 years of experience, with over a decade building ML and Deep Learning systems. Ashish is a guest lecturer at IIT BHU, teaching Applied Deep Learning. He has a bachelor's in technology from IIT BHU, and an MBA in marketing from Kellogg School of Management.

My father, Prof. B. B. Bansal, said that the best way to test understanding of a subject is to explain it to someone else. This book is dedicated to him, and my Gurus – my mother, my sister, who instilled the love of reading, and my wife, who taught me to consider all perspectives. I would like to mention Aditya sir, who instilled the value of hard work, which was invaluable in writing this book while balancing a full-time job and family. I would like to mention Ajeet, my manager at Twitter, and Omar, my manager at Twitch, for their support during the writing of this book. Ashish Agrawal and Subroto Chakravorty helped me tide over issues in code. I would like to thank the technical reviewers for ensuring the quality of the book and the editors for working tirelessly on the book. Tushar Gupta, my acquisitions editor, was instrumental in managing the various challenges along the way. Alex – your encouraging comments kept my morale high!

About the reviewers

Tony Mullen is an Associate Teaching Professor at The Khoury College of Computer Science at Northeastern University in Seattle. He has been involved in language technology for over 20 years and holds a master's degree in Linguistics from Trinity College, Dublin, and a PhD in natural language processing from the University of Groningen. He has published papers in the fields of sentiment analysis, named entity recognition, computer-assisted language learning, and ontology development, among others.
Recently, in addition to teaching and supervising graduate computer science students, he has been involved in NLP research in the medical domain and has consulted for a startup in language technology.

Kumar Shridhar is an NLP researcher at ETH Zürich and founder of NeuralSpace. He believes that an NLP system should comprehend texts as humans do. He is working towards the design of flexible NLP systems, making them more robust and interpretable. He also believes that NLP systems should not be restricted to a few languages, and with NeuralSpace he is extending NLP capabilities to low-resource languages.

Preface

2017 was a watershed moment for Natural Language Processing (NLP), with Transformer- and attention-based networks coming to the fore. The past few years have been as transformational for NLP as AlexNet was for computer vision in 2012. Tremendous advances in NLP have been made, and we are now moving from research labs into applications. These advances span the domains of Natural Language Understanding (NLU), Natural Language Generation (NLG), and Natural Language Interaction (NLI). With so much research in all of these domains, it can be a daunting task to understand the exciting developments in NLP.

This book is focused on cutting-edge applications in the fields of NLP, language generation, and dialog systems. It covers the concepts of pre-processing text using techniques such as tokenization, parts-of-speech (POS) tagging, and lemmatization using popular libraries such as Stanford NLP and spaCy. Named Entity Recognition (NER) models are built from scratch using Bi-directional Long Short-Term Memory networks (BiLSTMs), Conditional Random Fields (CRFs), and Viterbi decoding.
Taking a very practical, application-focused perspective, the book covers key emerging areas such as generating text for use in sentence completion and text summarization, multi-modal networks that bridge images and text by generating captions for images, and managing the dialog aspects of chatbots. It covers one of the most important reasons behind recent advances of NLP – transfer learning and fine-tuning. Unlabeled textual data is easily available, but labeling this data is costly. This book covers practical techniques that can simplify the labeling of textual data.

By the end of the book, I hope you will have advanced knowledge of the tools, techniques, and deep learning architectures used to solve complex NLP problems. The book will cover encoder-decoder networks, Long Short-Term Memory networks (LSTMs) and BiLSTMs, CRFs, BERT, GPT-2, GPT-3, Transformers, and other key technologies using TensorFlow.

Advanced TensorFlow techniques required for building advanced models are also covered:

• Building custom models and layers
• Building custom loss functions
• Implementing learning rate annealing
• Using tf.data for loading data efficiently
• Checkpointing models to enable long training times (usually several days)

This book contains working code that can be adapted to your own use cases. I hope that you will even be able to do novel state-of-the-art research using the skills you'll gain as you progress through the book.

Who this book is for

This book assumes that the reader has some familiarity with the basics of deep learning and the fundamental concepts of NLP. This book focuses on advanced applications and building NLP systems that can solve complex tasks.
All kinds of readers will be able to follow the content of the book, but readers who can benefit the most from this book include:

• Intermediate Machine Learning (ML) developers who are familiar with the basics of supervised learning and deep learning techniques
• Professionals who already use TensorFlow/Python for purposes such as data science, ML, research, analysis, etc., and can benefit from a more solid understanding of advanced NLP techniques

What this book covers

Chapter 1, Essentials of NLP, provides an overview of various topics in NLP such as tokenization, stemming, lemmatization, POS tagging, vectorization, etc. An overview of common NLP libraries like spaCy, Stanford NLP, and NLTK, with their key capabilities and use cases, will be provided. We will also build a simple classifier for spam.

Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs, covers the NLU use case of sentiment analysis, with an overview of Recurrent Neural Networks (RNNs), LSTMs, and BiLSTMs, which are the basic building blocks of modern NLP models. We will also use tf.data for efficient use of CPUs and GPUs to speed up data pipelines and model training.

Chapter 3, Named Entity Recognition (NER) with BiLSTMs, CRFs, and Viterbi Decoding, focuses on the key NLU problem of NER, which is a basic building block of task-oriented chatbots. We will build a custom layer for CRFs for improving the accuracy of NER, and the Viterbi decoding scheme, which is often applied to a deep model to improve the quality of the output.

Chapter 4, Transfer Learning with BERT, covers a number of important concepts in modern deep NLP such as types of transfer learning, pre-trained embeddings, an overview of Transformers, and BERT and its application in improving the sentiment analysis task introduced in Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs.
Chapter 5, Generating Text with RNNs and GPT-2, focuses on generating text with a custom character-based RNN and improving it with Beam Search. We will also cover the GPT-2 architecture and touch upon GPT-3.

Chapter 6, Text Summarization with Seq2seq Attention and Transformer Networks, takes on the challenging task of abstractive text summarization. BERT and GPT are two halves of the full encoder-decoder model. We put them together to build a seq2seq model for summarizing news articles by generating headlines for them. How ROUGE metrics are used for the evaluation of summarization is also covered.

Chapter 7, Multi-Modal Networks and Image Captioning with ResNets and Transformers, combines computer vision and NLP together to see if a picture is indeed worth a thousand words! We will build a custom Transformer model from scratch and train it to generate captions for images.

Chapter 8, Weakly Supervised Learning for Classification with Snorkel, focuses on a key problem – labeling data. While NLP has a lot of unlabeled data, labeling it is quite an expensive task. This chapter introduces the snorkel library and shows how massive amounts of data can be quickly labeled.

Chapter 9, Building Conversational AI Applications with Deep Learning, combines the various techniques covered throughout the book to show how different types of chatbots, such as question-answering or slot-filling bots, can be built.

Chapter 10, Installation and Setup Instructions for Code, walks through all the instructions required to install and configure a system for running the code supplied with the book.

To get the most out of this book

• It would be a good idea to get a background on the basics of deep learning models and TensorFlow.
• The use of a GPU is highly recommended. Some of the models, especially in the later chapters, are pretty big and complex. They may take hours or days to fully train on CPUs. RNNs are very slow to train without the use of GPUs.
You can get access to free GPUs on Google Colab, and instructions for doing so are provided in the first chapter.

Download the example code files

The code bundle for the book is hosted on GitHub at PacktPublishing/Advanced-Natural-Language-Processing-with-TensorFlow-2. We also have other code bundles from our rich catalog of books and videos available at. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: downloads/9781800200937_ColorImages.pdf.

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. For example: "In the num_capitals() function, substitutions are performed for the capital letters in English."

A block of code is set as follows:

    en = snlp.Pipeline(lang='en')
    def word_counts(x, pipeline=en):
        doc = pipeline(x)
        count = sum([len(sentence.tokens) for sentence in doc.sentences])
        return count

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

    en = snlp.Pipeline(lang='en')
    def word_counts(x, pipeline=en):
        doc = pipeline(x)
        count = sum([len(sentence.tokens) for sentence in doc.sentences])
        return count

Any command-line input or output is written as follows:

    !pip install gensim

Bold: Indicates a new term, an important word, or words that you see on the screen, for example, in menus or dialog boxes. For example: "Select System info from the Administration panel."

Warnings or important notes appear like this.

Tips and tricks appear like this.

Get in touch

Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email Packt at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you could report this to us. Please visit, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected].

1
Essentials of NLP

Language has been a part of human evolution. The development of language allowed better communication between people and tribes. The evolution of written language, initially as cave paintings and later as characters, allowed information to be distilled, stored, and passed on from generation to generation. Some would even say that the hockey-stick curve of advancement is because of the ever-accumulating cache of stored information. As this stored information trove becomes larger and larger, the need for computational methods to process and distill the data becomes more acute.

In the past decade, a lot of advances were made in the areas of image and speech recognition. Advances in Natural Language Processing (NLP) are more recent, though computational methods for NLP have been an area of research for decades. Processing textual data requires many different building blocks upon which advanced models can be built. Some of these building blocks themselves can be quite challenging and advanced. This chapter and the next focus on these building blocks and the problems that can be solved with them through simple models. In this chapter, we will focus on the basics of pre-processing text and build a simple spam detector.
Specifically, we will learn about the following:

• The typical text processing workflow
• Data collection and labeling
• Text normalization, including case normalization, text tokenization, stemming, and lemmatization
• Modeling datasets that have been text normalized
• Vectorizing text
• Modeling datasets with vectorized text

Let's start by getting to grips with the text processing workflow most NLP models use.

A typical text processing workflow

To understand how to process text, it is important to understand the general workflow for NLP. The following diagram illustrates the basic steps:

Figure 1.1: Typical stages of a text processing workflow

The first two steps of the process in the preceding diagram involve collecting labeled data. A supervised model or even a semi-supervised model needs data to operate. The next step is usually normalizing and featurizing the data. Models have a hard time processing text data as is. There is a lot of hidden structure in a given text that needs to be processed and exposed. These two steps focus on that. The last step is building a model with the processed inputs. While NLP has some unique models, this chapter will use only a simple deep neural network and focus more on the normalization and vectorization/featurization.

Often, the last three stages operate in a cycle, even though the diagram may give the impression of linearity. In industry, additional features require more effort to develop and more resources to keep running. Hence, it is important that features add value. Taking this approach, we will use a simple model to validate different normalization/vectorization/featurization steps. Now, let's look at each of these stages in detail.

Data collection and labeling

The first step of any Machine Learning (ML) project is to obtain a dataset. Fortunately, in the text domain, there is plenty of data to be found.
A common approach is to use libraries such as scrapy or Beautiful Soup to scrape data from the web. However, data is usually unlabeled, and as such can't be used in supervised models directly. This data is quite useful though. Through the use of transfer learning, a language model can be trained using unsupervised or semi-supervised methods and can be further used with a small training dataset specific to the task at hand. We will cover transfer learning in more depth in Chapter 4, Transfer Learning with BERT, when we look at transfer learning using BERT embeddings.

In the labeling step, textual data sourced in the data collection step is labeled with the right classes. Let's take some examples. If the task is to build a spam classifier for emails, then the previous step would involve collecting lots of emails. This labeling step would be to attach a spam or not spam label to each email. Another example could be sentiment detection on tweets. The data collection step would involve gathering a number of tweets. This step would label each tweet with a label that acts as a ground truth. A more involved example would involve collecting news articles, where the labels would be summaries of the articles. Yet another example of such a case would be an email auto-reply functionality. Like the spam case, a number of emails with their replies would need to be collected. The labels in this case would be short pieces of text that would approximate replies.

If you are working on a specific domain without much public data, you may have to do these steps yourself. Given that text data is generally available (outside of specific domains like health), labeling is usually the biggest challenge. It can be quite time consuming or resource intensive to label data. There has been a lot of recent focus on using semi-supervised approaches to labeling data.
We will cover some methods for labeling data at scale using semi-supervised methods and the snorkel library in Chapter 8, Weakly Supervised Learning for Classification with Snorkel, when we look at weakly supervised learning for classification using Snorkel.

There are a number of commonly used datasets that are available on the web for use in training models. Using transfer learning, these generic datasets can be used to prime ML models, and then you can use a small amount of domain-specific data to fine-tune the model. Using these publicly available datasets gives us a few advantages. First, all the data collection has already been performed. Second, labeling has already been done. Lastly, using such a dataset allows the comparison of results with the state of the art; most papers use specific datasets in their area of research and publish benchmarks. For example, the Stanford Question Answering Dataset (or SQuAD for short) is often used as a benchmark for question-answering models. It is a good source to train on as well.

Collecting labeled data

In this book, we will rely on publicly available datasets. The appropriate datasets will be called out in their respective chapters along with instructions on downloading them. To build a spam detection system on an email dataset, we will be using the SMS Spam Collection dataset made available by University of California, Irvine. This dataset can be downloaded using instructions available in the tip box below. Each SMS is tagged as "SPAM" or "HAM," with the latter indicating it is not a spam message.
Development environment setup In this chapter, we will be using Google Colaboratory, or Colab for short, to write code. You can use your Google account, or register a new account. Google Colab is free to use, requires no configuration, and also provides access to GPUs. The user interface is very similar to a Jupyter notebook, so it should seem familiar. To get started, please navigate to colab.research.google.com using a supported web browser. A web page similar to the screenshot below should appear: Figure 1.2: Google Colab website [4] Chapter 1 The next step is to create a new notebook. There are a couple of options. The first option is to create a new notebook in Colab and type in the code as you go along in the chapter. The second option is to upload a notebook from the local drive into Colab. It is also possible to pull in notebooks from GitHub into Colab, the process for which is detailed on the Colab website. For the purposes of this chapter, a complete notebook named SMS_Spam_Detection.ipynb is available in the GitHub repository of the book in the chapter1-nlp-essentials folder. Please upload this notebook into Google Colab by clicking File | Upload Notebook. Specific sections of this notebook will be referred to at the appropriate points in the chapter in tip boxes. The instructions for creating the notebook from scratch are in the main description. Click on the File menu option at the top left and click on New Notebook. A new notebook will open in a new browser tab. Click on the notebook name at the top left, just above the File menu option, and edit it to read SMS_Spam_Detection. Now the development environment is set up. It is time to begin loading in data. First, let us edit the first line of the notebook and import TensorFlow 2. Enter the following code in the first cell and execute it: %tensorflow_version 2.x import tensorflow as tf import os import io tf.__version__ The output of running this cell should look like this: TensorFlow 2.x is selected. 
    '2.4.0'

This confirms that version 2.4.0 of the TensorFlow library was loaded. The highlighted line in the preceding code block is a magic command for Google Colab, instructing it to use TensorFlow version 2+. The next step is to download the data file and unzip it to a location in the Colab notebook on the cloud. The code for loading the data is in the Download Data section of the notebook. Also note that as of writing, the release version of TensorFlow was 2.4.

This can be done with the following code:

    # Download the zip file
    path_to_zip = tf.keras.utils.get_file("smsspamcollection.zip",
                      origin="", extract=True)

    # Unzip the file into a folder
    !unzip $path_to_zip -d data

The following output confirms that the data was downloaded and extracted:

    Archive: /root/.keras/datasets/smsspamcollection.zip
      inflating: data/SMSSpamCollection
      inflating: data/readme

Reading the data file is trivial:

    # Let's see if we read the data correctly
    lines = io.open('data/SMSSpamCollection').read().strip().split('\n')
    lines[0]

The last line of code shows a sample line of data:

    'ham\tGo until jurong point, crazy.. Available only in bugis n great world'

This example is labeled as not spam. The next step is to split each line into two columns – one with the text of the message and the other as the label. While we are separating these labels, we will also convert the labels to numeric values. Since we are interested in predicting spam messages, we can assign a value of 1 to the spam messages. A value of 0 will be assigned to legitimate messages. The code for this part is in the Pre-Process Data section of the notebook. Please note that the following code is verbose for clarity:

    spam_dataset = []
    for line in lines:
        label, text = line.split('\t')
        if label.strip() == 'spam':
            spam_dataset.append((1, text.strip()))
        else:
            spam_dataset.append((0, text.strip()))
    print(spam_dataset[0])

    (0, 'Go until jurong point, crazy..
Available only in bugis n great world la e buffet... Cine there got amore wat...')

Now the dataset is ready for further processing in the pipeline. However, let's take a short detour to see how to configure GPU access in Google Colab.

Enabling GPUs on Google Colab

One of the advantages of using Google Colab is access to free GPUs for small tasks. GPUs make a big difference in the training time of NLP models, especially ones that use Recurrent Neural Networks (RNNs). The first step in enabling GPU access is to start a runtime, which can be done by executing a command in the notebook. Then, click on the Runtime menu option and select the Change Runtime option, as shown in the following screenshot:

Figure 1.3: Colab runtime settings menu option

Next, a dialog box will show up, as shown in the following screenshot. Expand the Hardware Accelerator option and select GPU:

Figure 1.4: Enabling GPUs on Colab

Now you should have access to a GPU in your Colab notebook! In NLP models, especially when using RNNs, GPUs can shave minutes or hours off the training time. For now, let's turn our attention back to the data that has been loaded and is ready to be processed further for use in models.

Text normalization

Text normalization is a pre-processing step aimed at improving the quality of the text and making it suitable for machines to process. Four main steps in text normalization are case normalization, tokenization and stop word removal, Parts-of-Speech (POS) tagging, and stemming.

Case normalization applies to languages that use uppercase and lowercase letters. All languages based on the Latin alphabet or the Cyrillic alphabet (Russian, Mongolian, and so on) use upper- and lowercase letters. Other languages that sometimes use this distinction are Greek, Armenian, Cherokee, and Coptic. In case normalization, all letters are converted to the same case. This is quite helpful in semantic use cases; however, in other cases, it may hinder performance.
In the spam example, spam messages may have more words in all caps compared to regular messages.

Another common normalization step removes punctuation in the text. Again, this may or may not be useful given the problem at hand. In most cases, this should give good results. However, in some cases, such as spam or grammar models, it may hinder performance. It is more likely for spam messages to use more exclamation marks or other punctuation for emphasis.

The code for this part is in the Data Normalization section of the notebook. Let's build a baseline model with three simple features:

• Number of characters in the message
• Number of capital letters in the message
• Number of punctuation symbols in the message

To do so, first, we will convert the data into a pandas DataFrame:

import pandas as pd

df = pd.DataFrame(spam_dataset, columns=['Spam', 'Message'])

Next, let's build some simple functions that can count the length of the message, and the numbers of capital letters and punctuation symbols. Python's regular expression package, re, will be used to implement these:

import re

def message_length(x):
    # returns total number of characters
    return len(x)

def num_capitals(x):
    _, count = re.subn(r'[A-Z]', '', x)  # only works in English
    return count

def num_punctuation(x):
    _, count = re.subn(r'\W', '', x)
    return count

In the num_capitals() function, substitutions are performed for the capital letters in English. The count of these substitutions provides the count of capital letters. The same technique is used to count the number of punctuation symbols. Please note that the method used to count capital letters is specific to English.
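To build intuition for what these three functions measure, they can be exercised on a made-up message (the string below is a hypothetical example, not taken from the dataset). One subtlety worth noting: because \W matches any non-word character, num_punctuation() counts whitespace as well as punctuation marks:

```python
import re

def message_length(x):
    # returns total number of characters
    return len(x)

def num_capitals(x):
    # counts uppercase letters (English-specific)
    _, count = re.subn(r'[A-Z]', '', x)
    return count

def num_punctuation(x):
    # counts every non-word character, including spaces
    _, count = re.subn(r'\W', '', x)
    return count

msg = "WINNER!! You have won"
print(message_length(msg))   # 21
print(num_capitals(msg))     # 7: the six letters of WINNER plus the Y
print(num_punctuation(msg))  # 5: two '!' plus three spaces
```

The inclusion of spaces in the punctuation count does not hurt this baseline, since the feature remains correlated with heavy punctuation use, but it is the kind of behavior worth knowing about when interpreting the feature.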
Additional feature columns will be added to the DataFrame, and then the set will be split into test and train sets:

df['Capitals'] = df['Message'].apply(num_capitals)
df['Punctuation'] = df['Message'].apply(num_punctuation)
df['Length'] = df['Message'].apply(message_length)
df.describe()

This should generate the following output:

Figure 1.5: Base dataset for initial spam model

The following code can be used to split the dataset into training and test sets, with 80% of the records in the training set and the rest in the test set. Furthermore, the labels will be separated from the features in both the training and test sets:

train = df.sample(frac=0.8, random_state=42)
test = df.drop(train.index)

x_train = train[['Length', 'Capitals', 'Punctuation']]
y_train = train[['Spam']]
x_test = test[['Length', 'Capitals', 'Punctuation']]
y_test = test[['Spam']]

Now we are ready to build a simple classifier to use this data.

Modeling normalized data

Recall that modeling was the last part of the text processing pipeline described earlier. In this chapter, we will use a very simple model, as the objective is to show different basic NLP data processing techniques more than modeling. Here, we want to see if three simple features can aid in the classification of spam. As more features are added, passing them through the same model will help in seeing whether the featurization aids or hampers the accuracy of the classification. The Model Building section of the workbook has the code shown in this section.
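As an aside, the mechanics of the 80/20 split performed earlier with sample(frac=0.8, random_state=42) and drop(train.index) can be illustrated without pandas. The sketch below uses stand-in index numbers and the standard library, purely to show that the two sets are disjoint and together cover every record:

```python
import random

indices = list(range(100))                   # stand-in for the DataFrame index
rng = random.Random(42)                      # fixed seed, like random_state=42
train_idx = set(rng.sample(indices, k=80))   # 80% sampled for training
test_idx = set(indices) - train_idx          # the rest form the test set

print(len(train_idx), len(test_idx))         # 80 20
print(train_idx & test_idx)                  # set() -- no overlap
```

Keeping the split seeded makes experiments repeatable, which matters when comparing feature sets across the rest of this chapter.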
A function is defined that allows the construction of models with different numbers of inputs and hidden units:

# Basic 1-layer neural network model for evaluation
def make_model(input_dims=3, num_units=12):
    model = tf.keras.Sequential()
    # Adds a densely-connected layer with 12 units to the model:
    model.add(tf.keras.layers.Dense(num_units, input_dim=input_dims, activation='relu'))
    # Add a sigmoid layer with a binary output unit:
    model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

This model uses binary cross-entropy for computing loss and the Adam optimizer for training. The key metric, given that this is a binary classification problem, is accuracy. The default parameters passed to the function are sufficient, as only three features are being passed in.

We can train our simple baseline model with only three features like so:

model = make_model()
model.fit(x_train, y_train, epochs=10, batch_size=10)

Train on 4459 samples
Epoch 1/10
4459/4459 [==============================] - 1s 281us/sample - loss: 0.6062 - accuracy: 0.8141
Epoch 2/10
…
Epoch 10/10
4459/4459 [==============================] - 1s 145us/sample - loss: 0.1976 - accuracy: 0.9305

This is not bad, as our three simple features help us get to 93% accuracy. A quick check shows that there are 592 spam messages in the training set, out of a total of 4,459. So, this model is doing better than a very simple model that guesses everything as not spam; that model would have an accuracy of 87%. This number may be surprising but is fairly common in classification problems where there is a severe class imbalance in the data.
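Both numbers quoted above can be checked by hand. The 87% figure is the accuracy of a trivial majority-class model that predicts "not spam" for all 4,459 training messages, 592 of which are spam. And binary cross-entropy, the loss used by the model, can be computed directly for a single prediction:

```python
import math

# Majority-class baseline: predict 0 ("not spam") for everything
total, spam = 4459, 592
baseline_accuracy = (total - spam) / total
print(round(baseline_accuracy, 2))   # 0.87

# Binary cross-entropy for one example: -[y*log(p) + (1-y)*log(1-p)]
def binary_crossentropy(y, p):
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A confident correct prediction yields a small loss...
print(round(binary_crossentropy(1, 0.9), 4))   # 0.1054
# ...while a confident wrong prediction is penalized heavily
print(round(binary_crossentropy(1, 0.1), 4))   # 2.3026
```

The reported training loss of about 0.20 is therefore consistent with the model being right, and fairly confident, on most examples.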
Evaluating it on the test set gives an accuracy of around 93.4%:

model.evaluate(x_test, y_test)

1115/1115 [==============================] - 0s 94us/sample - loss: 0.1949 - accuracy: 0.9336
[0.19485870356516988, 0.9336323]

Please note that the actual performance you see may be slightly different due to the data splits and computational vagaries. A quick verification can be performed by plotting the confusion matrix to see the performance:

y_train_pred = model.predict_classes(x_train)
# confusion matrix
tf.math.confusion_matrix(tf.constant(y_train.Spam), y_train_pred)

                    Predicted Not Spam    Predicted Spam
Actual Not Spam     3,771                 96
Actual Spam         186                   406

This shows that 3,771 out of 3,867 regular messages were classified correctly, while 406 out of 592 spam messages were classified correctly. Again, you may get a slightly different result. To test the value of the features, try re-running the model after removing one of the features, such as the punctuation count or the number of capital letters, to get a sense of its contribution to the model. This is left as an exercise for the reader.

Tokenization

This step takes a piece of text and converts it into a list of tokens. If the input is a sentence, then separating the words would be an example of tokenization. Depending on the model, different granularities can be chosen. At the lowest level, each character could become a token. In some cases, entire sentences or paragraphs can be considered as one token:

Figure 1.6: Tokenizing a sentence

The preceding diagram shows two ways a sentence can be tokenized. One way to tokenize is to chop a sentence into words. Another way is to chop it into individual characters. However, tokenization can be a complex proposition in some languages such as Japanese and Mandarin.

Segmentation in Japanese

Many languages use a word separator, a space, to separate words. This makes the task of tokenizing on words trivial.
However, there are other languages that do not use any markers or separators between words. Some examples of such languages are Japanese and Chinese. In such languages, the task is referred to as segmentation.

Specifically, in Japanese, there are mainly three different types of characters that are used: Hiragana, Kanji, and Katakana. Kanji is adapted from Chinese characters, and similar to Chinese, there are thousands of characters. Hiragana is used for grammatical elements and native Japanese words. Katakana is mostly used for foreign words and names. Depending on the preceding characters, a character may be part of an existing word or the start of a new word. This makes Japanese one of the most complicated writing systems in the world. Compound words are especially hard. Consider the following compound word that reads Election Administration Committee:

選挙管理委員会

This can be tokenized in two different ways, outside of the entire phrase being considered one word. Here are two examples of tokenizing (from the Sudachi library):

選挙/管理/委員会 (Election / Administration / Committee)
選挙/管理/委員/会 (Election / Administration / Committee / Meeting)

Common libraries that are used specifically for Japanese segmentation or tokenization are MeCab, Juman, Sudachi, and Kuromoji. MeCab is used in Hugging Face, spaCy, and other libraries.

The code shown in this section is in the Tokenization and Stop Word Removal section of the notebook.

Fortunately, most languages are not as complex as Japanese and use spaces to separate words. In Python, splitting by spaces is trivial. Let's take an example:

sentence = 'Go until jurong point, crazy..
Available only in bugis n great world'
sentence.split()

The output of the preceding split operation results in the following:

['Go', 'until', 'jurong', 'point,', 'crazy..', 'Available', 'only', 'in', 'bugis', 'n', 'great', 'world']

The two highlighted entries in the preceding output show that the naïve approach in Python will result in punctuation being included in the words, among other issues. Consequently, this step is better done through a library like StanfordNLP. Using pip, let's install this package in our Colab notebook:

!pip install stanfordnlp

The StanfordNLP package uses PyTorch under the hood, as well as a number of other packages. These and other dependencies will be installed. By default, the package does not install language files; these have to be downloaded, as shown in the following code:

import stanfordnlp as snlp

en = snlp.download('en')

The English file is approximately 235 MB. A prompt will be displayed to confirm the download and the location to store it in:

Figure 1.7: Prompt for downloading English models

Google Colab recycles the runtimes upon inactivity. This means that if you work through the commands in the book at different times, you may have to re-execute every command from the start, including downloading and processing the dataset, downloading the StanfordNLP English files, and so on. A local notebook server would usually maintain the state of the runtime but may have limited processing power. For simpler examples, as in this chapter, Google Colab is a decent solution. For the more advanced examples later in the book, where training may run for hours or days, a local runtime or one running on a cloud Virtual Machine (VM) would be preferred.

This package provides capabilities for tokenization, POS tagging, and lemmatization out of the box.
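Before turning to the library, it is worth noting that a slightly smarter split than str.split() can be had with a single regular expression that separates punctuation from words. This is only a rough sketch for intuition, not what StanfordNLP does internally:

```python
import re

sentence = ('Go until jurong point, crazy.. '
            'Available only in bugis n great world')

# \w+ grabs runs of word characters; [^\w\s] grabs each punctuation mark
tokens = re.findall(r"\w+|[^\w\s]", sentence)
print(tokens[:8])   # ['Go', 'until', 'jurong', 'point', ',', 'crazy', '.', '.']
```

Unlike the whitespace split, 'point' and ',' now come out as separate tokens, though this approach still has no notion of sentence boundaries, contractions, or languages without spaces.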
To start with tokenization, we instantiate a pipeline and tokenize a sample text to see how this works:

en = snlp.Pipeline(lang='en', processors='tokenize')

The lang parameter is used to indicate that an English pipeline is desired. The second parameter, processors, indicates the type of processing that is desired in the pipeline. This library can also perform the following processing steps in the pipeline:

• pos labels each token with a POS tag. The next section provides more details on POS tags.
• lemma can convert different forms of verbs, for example, to the base form. This will be covered in detail in the Stemming and lemmatization section later in this chapter.
• depparse performs dependency parsing between words in a sentence. Consider the following example sentence, "Hari went to school." Hari is interpreted as a noun by the POS tagger and becomes the governor of the word went. The word school is dependent on went, as it describes the object of the verb.

For now, only tokenization of the text is desired, so only the tokenizer is used:

tokenized = en(sentence)
len(tokenized.sentences)

2

This shows that the tokenizer correctly divided the text into two sentences. To inspect the individual tokens, the following code can be used:

for snt in tokenized.sentences:
    for word in snt.tokens:
        print(word.text)
    print("")

Go
until
jurong
point
,
crazy
..
Available
only
in
bugis
n
great
world

Note the highlighted tokens in the preceding output. Punctuation marks were separated out into their own tokens, and the text was split into multiple sentences. This is an improvement over only using spaces to split. In some applications, removal of punctuation may be required. This will be covered in the next section.

Consider the preceding example of Japanese.
To see the performance of StanfordNLP on Japanese tokenization, the following piece of code can be used:

jp = snlp.download('ja')

This is the first step, which involves downloading the Japanese language model, similar to the English model that was downloaded and installed previously. Next, a Japanese pipeline will be instantiated and the text will be processed:

jp = snlp.Pipeline(lang='ja')
jp_line = jp("選挙管理委員会")

You may recall that the Japanese text reads Election Administration Committee. Correct tokenization should produce three words, where the first two should be two characters each and the last word should be three characters:

for snt in jp_line.sentences:
    for word in snt.tokens:
        print(word.text)

選挙
管理
委員会

This matches the expected output. StanfordNLP supports 53 languages, so the same code can be used for tokenizing any language that is supported.

Coming back to the spam detection example, a new feature can be implemented that counts the number of words in each message using this tokenization functionality. This word count feature is implemented in the Adding Word Count Feature section of the notebook.

It is possible that spam messages have different numbers of words than regular messages. The first step is to define a method to compute the number of words:

en = snlp.Pipeline(lang='en')

def word_counts(x, pipeline=en):
    doc = pipeline(x)
    count = sum([len(sentence.tokens) for sentence in doc.sentences])
    return count

Next, using the train and test splits, add a column for the word count feature:

train['Words'] = train['Message'].apply(word_counts)
test['Words'] = test['Message'].apply(word_counts)

x_train = train[['Length', 'Punctuation', 'Capitals', 'Words']]
y_train = train[['Spam']]
x_test = test[['Length', 'Punctuation', 'Capitals', 'Words']]
y_test = test[['Spam']]

model = make_model(input_dims=4)

The last line in the preceding code block creates a new model with four input features.
PyTorch warning

When you execute functions in the StanfordNLP library, you may see a warning like this:

/pytorch/aten/src/ATen/native/LegacyDefinitions.cpp:19: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated, please use a mask with dtype torch.bool instead.

Internally, StanfordNLP uses the PyTorch library. This warning arises because StanfordNLP uses an older version of a function that is now deprecated. For all intents and purposes, this warning can be ignored. It is expected that the maintainers of StanfordNLP will update their code.

Modeling tokenized data

This model can be trained like so:

model.fit(x_train, y_train, epochs=10, batch_size=10)

Train on 4459 samples
Epoch 1/10
4459/4459 [==============================] - 1s 202us/sample - loss: 2.4261 - accuracy: 0.6961
...
Epoch 10/10
4459/4459 [==============================] - 1s 142us/sample - loss: 0.2061 - accuracy: 0.9312

There is only a marginal improvement in accuracy. One hypothesis is that the number of words is not useful. It would be useful if the average number of words in spam messages were smaller or larger than in regular messages. Using pandas, this can be quickly verified:

train.loc[train.Spam == 1].describe()

Figure 1.8: Statistics for spam message features

Let's compare the preceding results to the statistics for regular messages:

train.loc[train.Spam == 0].describe()

Figure 1.9: Statistics for regular message features

Some interesting patterns can quickly be seen. Spam messages usually show much less deviation from the mean. Focus on the Capitals feature column. It shows that regular messages use far fewer capitals than spam messages. At the 75th percentile, there are 3 capitals in a regular message versus 21 in a spam message. On average, regular messages have 4 capital letters while spam messages have 15. This variation is much less pronounced in the number of words.
Regular messages have 17 words on average, while spam has 29. At the 75th percentile, regular messages have 22 words while spam messages have 35. This quick check yields an indication as to why adding the word count feature wasn't that useful. However, there are still a couple of things to consider. First, the tokenization model split out punctuation marks as words. Ideally, these should be excluded from the word counts, as the punctuation feature already shows that spam messages use far more punctuation characters. This will be covered in the Parts-of-speech tagging section. Secondly, languages have some common words that are usually excluded. This is called stop word removal and is the focus of the next section.

Stop word removal

Stop word removal involves removing common words such as articles (the, an) and conjunctions (and, but), among others. In the context of information retrieval or search, these words would not be helpful in identifying documents or web pages that match the query. As an example, consider the query "Where is Google based?". In this query, is is a stop word; the query would produce similar results irrespective of its inclusion. To determine the stop words, a simple approach is to use grammar clues. In English, articles and conjunctions are examples of classes of words that can usually be removed. A more robust way is to consider the frequency of occurrence of words in a corpus, set of documents, or text. The most frequent terms can be selected as candidates for the stop word list. It is recommended that this list be reviewed manually, as there can be cases where words are frequent in a collection of documents but still meaningful. This can happen if all the documents in the collection are from a specific domain or on a specific topic. Consider a set of documents from the Federal Reserve.
The word economy may appear quite frequently in this case; however, it is unlikely to be a candidate for removal as a stop word. In some cases, stop words may actually contain information. This may be applicable to phrases. Consider the fragment "flights to Paris." In this case, to provides valuable information, and its removal may change the meaning of the fragment.

Recall the stages of the text processing workflow. The step after text normalization is vectorization. This step is discussed in detail later in the Vectorizing text section of this chapter, but the key step in vectorization is to build a vocabulary or dictionary of all the tokens. The size of this vocabulary can be reduced by removing stop words. Further, while training and evaluating models, removing stop words reduces the number of computation steps that need to be performed. Hence, the removal of stop words can yield benefits in terms of both computation speed and storage space. Modern advances in NLP see smaller and smaller stop word lists as more efficient encoding schemes and computation methods evolve.

Let's try to see the impact of stop words on the spam problem to develop some intuition about their usefulness. Many NLP packages provide lists of stop words that can be removed from the text after tokenization. Tokenization was done through the StanfordNLP library previously; however, this library does not come with a list of stop words. NLTK and spaCy supply stop words for a set of languages. For this example, we will use an open source package called stopwordsiso. The Stop Word Removal section of the notebook contains the code for this section. This Python package takes its list of stop words from the stopwords-iso GitHub project at. It provides stop words in 57 languages. The first step is to install the Python package that provides access to the stop word lists.
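As an aside, the frequency-based selection of stop word candidates described above can be sketched with the standard library. The miniature corpus here is invented purely for illustration:

```python
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "a dog and a cat met on the mat",
]

# Count every token across the corpus
counts = Counter(token for doc in corpus for token in doc.split())

# The most frequent tokens are candidates for a stop word list,
# subject to the manual review recommended above
candidates = [word for word, _ in counts.most_common(3)]
print(candidates[0])   # the
```

On a real corpus, the top of this list would be dominated by articles, pronouns, and prepositions, with domain-specific but meaningful words (like economy in the Federal Reserve example) also appearing, which is exactly why the manual review step matters.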
The following command will install the package through the notebook:

!pip install stopwordsiso

Supported languages can be checked with the following commands:

import stopwordsiso as stopwords

stopwords.langs()

The English language stop words can be inspected as well to get an idea of some of the words:

sorted(stopwords.stopwords('en'))

["'ll", "'tis", "'twas", "'ve", '10', '39', 'a', "a's", 'able', 'ableabout', 'about', 'above', 'abroad', 'abst', 'accordance', 'according', 'accordingly', 'across', 'act', 'actually', 'ad', 'added', ...

Given that tokenization was already implemented in the preceding word_counts() method, the implementation can be updated to also remove stop words. Note, however, that all the stop words are in lowercase. Case normalization was discussed earlier, and capital letters were a useful feature for spam detection. In this case, tokens need to be converted to lowercase for the stop word comparison to work effectively:

en_sw = stopwords.stopwords('en')

def word_counts(x, pipeline=en):
    doc = pipeline(x)
    count = 0
    for sentence in doc.sentences:
        for token in sentence.tokens:
            if token.text.lower() not in en_sw:
                count += 1
    return count

A consequence of removing stop words is that a message such as "When are you going to ride your bike?" counts as only 3 words. When we check whether this has had any effect on the statistics for word counts, the following picture emerges:

Figure 1.10: Word counts for spam messages after removing stop words

Compared to the word counts prior to stop word removal, the average number of words has been reduced from 29 to 18, a decrease of almost 40%. The 25th percentile changed from 26 to 14. The maximum has also been reduced from 49 to 33.

The impact on regular messages is even more dramatic:

Figure 1.11: Word counts for regular messages after removing stop words

Comparing these statistics to those from before stop word removal, the average number of words has more than halved, to almost 8.
The maximum number of words has also been reduced from 209 to 147. The standard deviation of regular messages is about the same as the mean, indicating that there is a lot of variation in the number of words in regular messages. Now, let's see if this helps us train a model and improve its accuracy.

Modeling data with stop words removed

Now that the word count feature with stop words removed has been computed, it can be added to the model to see its impact:

train['Words'] = train['Message'].apply(word_counts)
test['Words'] = test['Message'].apply(word_counts)

x_train = train[['Length', 'Punctuation', 'Capitals', 'Words']]
y_train = train[['Spam']]
x_test = test[['Length', 'Punctuation', 'Capitals', 'Words']]
y_test = test[['Spam']]

model = make_model(input_dims=4)
model.fit(x_train, y_train, epochs=10, batch_size=10)

Epoch 1/10
4459/4459 [==============================] - 2s 361us/sample - loss: 0.5186 - accuracy: 0.8652
Epoch 2/10
...
Epoch 9/10
4459/4459 [==============================] - 2s 355us/sample - loss: 0.1790 - accuracy: 0.9417
Epoch 10/10
4459/4459 [==============================] - 2s 361us/sample - loss: 0.1802 - accuracy: 0.9421

This accuracy reflects a slight improvement over the previous model:

model.evaluate(x_test, y_test)

1115/1115 [==============================] - 0s 74us/sample - loss: 0.1954 - accuracy: 0.9372
[0.19537461110027382, 0.93721974]

In NLP, stop word removal used to be standard practice. In more modern applications, however, stop words may actually hinder performance in some use cases rather than help, and it is becoming more common not to exclude them. Depending on the problem you are solving, stop word removal may or may not help.

Note that StanfordNLP will separate a word like can't into ca and n't. This represents the expansion of the short form into its constituents, can and not. These contractions may or may not appear in the stop word list. Implementing a more robust stop word detector is left to the reader as an exercise.
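The effect of the stop-word-aware word_counts() above can be mimicked without StanfordNLP or stopwordsiso. The sketch below uses naive whitespace tokenization and a tiny hand-picked stop list, both stand-ins for the real components, so its counts will not match the library exactly:

```python
# Hypothetical miniature stop word list; the real stopwordsiso list
# for English has hundreds of entries
en_sw = {"when", "are", "you", "going", "to", "your", "a", "the"}

def word_counts(text, stop_words=en_sw):
    # lowercase each token before the membership test,
    # mirroring token.text.lower() in the StanfordNLP version
    return sum(1 for tok in text.split() if tok.lower() not in stop_words)

print(word_counts("When are you going to ride your bike?"))   # 2
```

With whitespace tokenization, "bike?" survives as a single token because the trailing question mark keeps it out of the stop list; this is the punctuation interaction that the POS-based filtering in the next section addresses properly.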
StanfordNLP uses a supervised RNN with Bi-directional Long Short-Term Memory (BiLSTM) units. This architecture uses a vocabulary to generate embeddings through the vectorization of the vocabulary. Vectorization and the generation of embeddings are covered later in the chapter, in the Vectorizing text section. This architecture of BiLSTMs with embeddings is a common starting point in NLP tasks and will be covered and used in detail in successive chapters. This particular architecture for tokenization is considered the state of the art as of the time of writing this book. Prior to this, Hidden Markov Model (HMM)-based models were popular.

Depending on the languages in question, regular expression-based tokenization is another approach. The NLTK library provides the Penn Treebank tokenizer, based on regular expressions in a sed script. In future chapters, other tokenization or segmentation schemes such as Byte Pair Encoding (BPE) and WordPiece will be explained. The next task in text normalization is to understand the structure of a text through POS tagging.

Part-of-speech tagging

Languages have a grammatical structure. In most languages, words can be categorized primarily into verbs, adverbs, nouns, and adjectives. The objective of this processing step is to take a piece of text and tag each word token with a POS identifier. Note that this makes sense only in the case of word-level tokens. Commonly, the Penn Treebank POS tagger is used by libraries, including StanfordNLP, to tag words. By convention, POS tags are added as a code after the word, separated by a slash. As an example, NNS is the tag for a plural noun. If the word goats were encountered, it would be represented as goats/NNS. In the StanfordNLP library, Universal POS (UPOS) tags are used. The following tags are part of the UPOS tag set. More details on the mapping of standard POS tags to UPOS tags can be seen at en-penn-uposf.html.
The following is a table of the most common tags:

ADJ — Adjective: Usually describes a noun. Separate tags are used for comparatives and superlatives. Examples: Great, pretty
ADP — Adposition: Used to modify an object such as a noun, pronoun, or phrase; for example, "Walk up the stairs." Some languages like English use prepositions, while others such as Hindi and Japanese use postpositions. Examples: Up, inside
ADV — Adverb: A word or phrase that modifies or qualifies an adjective, verb, or another adverb. Examples: Loudly, often
AUX — Auxiliary verb: Used in forming mood, voice, or tenses of other verbs. Examples: Will, can, may
CCONJ — Co-ordinating conjunction: Joins two phrases, clauses, or sentences. Examples: And, but, that
INTJ — Interjection: An exclamation, interruption, or sudden remark. Examples: Oh, uh, lol
NOUN — Noun: Identifies people, places, or things. Examples: Office, book
NUM — Numeral: Represents a quantity. Examples: Six, nine
DET — Determiner: Identifies a specific noun, usually as a singular. Examples: A, an, the
PART — Particle: Parts of speech outside of the main types. Examples: To, n't
PRON — Pronoun: Substitutes for other nouns, especially proper nouns. Examples: She, her
PROPN — Proper noun: A name for a specific person, place, or thing. Examples: Gandhi, US
PUNCT — Different punctuation symbols. Examples: ,?/
SCONJ — Subordinating conjunction: Connects an independent clause to a dependent clause. Examples: Because, while
SYM — Symbols, including currency signs, emojis, and so on. Examples: $, #, %, :)
VERB — Verb: Denotes action or occurrence. Examples: Go, do
X — Other: That which cannot be classified elsewhere. Examples: Etc, 4. (a numbered list bullet)

The best way to understand how POS tagging works is to try it out. The code for this section is in the POS Based Features section of the notebook:

en = snlp.Pipeline(lang='en')
txt = "Yo you around? A friend of mine's lookin."
pos = en(txt)

The preceding code instantiates an English pipeline and processes a sample piece of text.
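The word/TAG convention described above can be reproduced independently of StanfordNLP. The helper below is a hypothetical stand-in that operates on pre-tagged (text, upos) pairs, just to make the output format concrete:

```python
def format_tagged(pairs):
    # join each token with its UPOS tag using the word/TAG convention
    return " ".join(f"{text}/{tag}" for text, tag in pairs)

tagged = [("Yo", "PRON"), ("you", "PRON"), ("around", "ADV"), ("?", "PUNCT")]
print(format_tagged(tagged))   # Yo/PRON you/PRON around/ADV ?/PUNCT
```

In the StanfordNLP output that follows, the tags themselves come from the trained tagger; only the slash-joined presentation matches this sketch.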
The next piece of code is a reusable function to print back the sentence tokens with their POS tags:

def print_pos(doc):
    text = ""
    for sentence in doc.sentences:
        for token in sentence.tokens:
            text += token.words[0].text + "/" + \
                    token.words[0].upos + " "
        text += "\n"
    return text

This method can be used to investigate the tagging for the preceding example sentence:

print(print_pos(pos))

Yo/PRON you/PRON around/ADV ?/PUNCT
A/DET friend/NOUN of/ADP mine/PRON 's/PART lookin/NOUN ./PUNCT

Most of these tags would make sense, though there may be some inaccuracies. For example, the word lookin is miscategorized as a noun. Neither StanfordNLP nor a model from another package will be perfect. This is something that we have to account for when building models using such features. There are a couple of different features that can be built using these POS tags. First, we can update the word_counts() method to exclude punctuation from the count of words; the current method is unaware of punctuation when it counts the words. Additional features can be created that look at the proportion of different types of grammatical elements in the messages. Note that so far, all features are based on the structure of the text, and not on the content itself. Working with content features will be covered in more detail as this book continues.

As a next step, let's update the word_counts() method and add a feature for the percentage of symbols and punctuation in a message, with the hypothesis that spam messages may use more punctuation and symbols. Other features around the types of different grammatical elements can also be built; these are left to you to implement. Our word_counts() method is updated as follows:

en_sw = stopwords.stopwords('en')

def word_counts_v3(x, pipeline=en):
    doc = pipeline(x)
    totals = 0.
    count = 0.
    non_word = 0.
    for sentence in doc.sentences:
        totals += len(sentence.tokens)  # (1)
        for token in sentence.tokens:
            if token.text.lower() not in en_sw:
                if token.words[0].upos not in ['PUNCT', 'SYM']:
                    count += 1.
                else:
                    non_word += 1.
    non_word = non_word / totals
    return pd.Series([count, non_word], index=['Words_NoPunct', 'Punct'])

This function is a little different compared to the previous one. Since there are multiple computations that need to be performed on the message in each row, these operations are combined and a Series object with column labels is returned. This can be merged with the main DataFrame like so:

train_tmp = train['Message'].apply(word_counts_v3)
train = pd.concat([train, train_tmp], axis=1)

A similar process can be performed on the test set:

test_tmp = test['Message'].apply(word_counts_v3)
test = pd.concat([test, test_tmp], axis=1)

A quick check of the statistics for spam and non-spam messages in the training set shows the following, first for non-spam messages:

train.loc[train['Spam']==0].describe()

Figure 1.12: Statistics for regular messages after using POS tags

And then for spam messages:

train.loc[train['Spam']==1].describe()

Figure 1.13: Statistics for spam messages after using POS tags

In general, word counts have been reduced even further after stop word removal. Furthermore, the new Punct feature computes the ratio of punctuation tokens in a message relative to the total tokens. Now we can build a model with this data.
Modeling data with POS tagging

Plugging these features into the model, the following results are obtained:

x_train = train[['Length', 'Punctuation', 'Capitals', 'Words_NoPunct', 'Punct']]
y_train = train[['Spam']]
x_test = test[['Length', 'Punctuation', 'Capitals', 'Words_NoPunct', 'Punct']]
y_test = test[['Spam']]

model = make_model(input_dims=5)
# model = make_model(input_dims=3)
model.fit(x_train, y_train, epochs=10, batch_size=10)

Train on 4459 samples
Epoch 1/10
4459/4459 [==============================] - 1s 236us/sample - loss: 3.1958 - accuracy: 0.6028
Epoch 2/10
...
Epoch 10/10
4459/4459 [==============================] - 1s 139us/sample - loss: 0.1788 - accuracy: 0.9466

The accuracy shows a slight increase and is now up to 94.66%. Upon testing, it seems to hold:

model.evaluate(x_test, y_test)

1115/1115 [==============================] - 0s 91us/sample - loss: 0.2076 - accuracy: 0.9426
[0.20764057086989485, 0.9426009]

The final part of text normalization is stemming and lemmatization. Though we will not be building any features for the spam model using this, it can be quite useful in other cases.

Stemming and lemmatization

In certain languages, the same word can take a slightly different form depending on its usage. Consider the word depend itself. The following are all valid forms of the word depend: depends, depending, depended, dependent. Often, these variations are due to tenses. In some languages like Hindi, verbs may have different forms for different genders. Another case is derivatives of the same word, such as sympathy, sympathetic, sympathize, and sympathizer. These variations can take different forms in other languages. In Russian, proper nouns take different forms based on usage. Suppose there is a document talking about London (Лондон). The phrase in London (в Лондоне) spells London differently than from London (из Лондона).
These variations in the spelling of London can cause issues when matching some input to sections or words in a document. When processing and tokenizing text to construct a vocabulary of words appearing in the corpora, the ability to identify the root word can reduce the size of the vocabulary while expanding the accuracy of matches. In the preceding Russian example, any form of the word London can be matched to any other form if all the forms are normalized to a common representation post-tokenization. This process of normalization is called stemming or lemmatization. Stemming and lemmatization differ in their approach and sophistication but serve the same objective. Stemming is a simpler, heuristic rule-based approach that chops off the affixes of words. The most famous stemmer is called the Porter stemmer, published by Martin Porter in 1980. The official website is martin/PorterStemmer/, where various versions of the algorithm implemented in various languages are linked.

This stemmer only works for English and has rules including removing s at the end of words for plurals, and removing endings such as -ed or -ing. Consider the following sentence:

"Stemming is aimed at reducing vocabulary and aid understanding of morphological processes. This helps people understand the morphology of words and reduce size of corpus."

After stemming using Porter's algorithm, this sentence will be reduced to the following:

"Stem is aim at reduce vocabulari and aid understand of morpholog process . Thi help peopl understand the morpholog of word and reduc size of corpu ."

Note how different forms of morphology, understand, and reduce are all tokenized to the same form. Lemmatization approaches this task in a more sophisticated manner, using vocabularies and morphological analysis of words. In the study of linguistics, a morpheme is a unit smaller than or equal to a word. When a morpheme is a word in itself, it is called a root or a free morpheme.
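Before moving on to lemmatization, the rule-based suffix stripping that stemmers perform can be made concrete with a toy sketch. This is deliberately NOT the Porter algorithm (which applies several ordered rule phases with extra conditions); the function name and rules are invented purely for illustration:

```python
def naive_stem(word):
    """Toy rule-based stemmer: strip one common English suffix.
    This is NOT the Porter algorithm, only an illustration of the idea."""
    for suffix in ("ing", "ed", "es", "s"):
        # Only strip when a reasonably long stem remains
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print([naive_stem(w) for w in ["reducing", "aimed", "helps", "words"]])
# ['reduc', 'aim', 'help', 'word']
```

Even this crude version shows why stems need not be dictionary words: "reducing" becomes the fragment "reduc", just as Porter's algorithm produced fragments like "vocabulari" above.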
Conversely, every word can be decomposed into one or more morphemes. The study of morphemes is called morphology. Using this morphological information, a word's root form can be returned post-tokenization. This base or dictionary form of the word is called a lemma; hence, the process is called lemmatization. StanfordNLP includes lemmatization as part of processing. The Lemmatization section of the notebook has the code shown here. Here is a simple piece of code to take the preceding sentences and parse them:

text = "Stemming is aimed at reducing vocabulary and aid understanding of morphological processes. This helps people understand the morphology of words and reduce size of corpus."
lemma = en(text)

After processing, we can iterate through the tokens to get the lemma of each word. This is shown in the following code fragment. The lemma of a word is exposed as the .lemma property of each word inside a token. For the sake of brevity of code, a simplifying assumption is made here that each token has only one word. The POS for each word is also printed out to help us understand how the process was performed. Some key words in the following output are highlighted:

lemmas = ""
for sentence in lemma.sentences:
    for token in sentence.tokens:
        lemmas += token.words[0].lemma + "/" + \
                  token.words[0].upos + " "
    lemmas += "\n"
print(lemmas)

stem/NOUN be/AUX aim/VERB at/SCONJ reduce/VERB vocabulary/NOUN and/CCONJ aid/NOUN understanding/NOUN of/ADP morphological/ADJ process/NOUN ./PUNCT
this/PRON help/VERB people/NOUN understand/VERB the/DET morphology/NOUN of/ADP word/NOUN and/CCONJ reduce/VERB size/NOUN of/ADP corpus/ADJ ./PUNCT

Compare this output to the output of the Porter stemmer earlier. One immediate thing to notice is that lemmas are actual words as opposed to fragments, as was the case with the Porter stemmer. In the case of reduce, the usage in both sentences is in the form of a verb, so the choice of lemma is consistent.
Focus on the words understand and understanding in the preceding output. As the POS tags show, the word is used in two different forms. Consequently, it is not reduced to the same lemma. This is different from the Porter stemmer. The same behavior can be observed for morphology and morphological. This is quite sophisticated behavior. Now that text normalization is completed, we can begin the vectorization of text.

Vectorizing text

While building models for SMS message spam detection thus far, only aggregate features based on counts or distributions of lexical or grammatical features have been considered. The actual words in the messages have not been used thus far. There are a couple of challenges in using the text content of messages. The first is that text can be of arbitrary length. Comparing this to image data, we know that each image has a fixed width and height. Even if the corpus of images has a mixture of sizes, images can be resized to a common size with minimal loss of information by using a variety of compression mechanisms. In NLP, this is a bigger problem compared to computer vision. A common approach to handle this is to truncate the text. We will see various ways to handle variable-length texts in various examples throughout the book.

The second issue is that of the representation of words with a numerical quantity or feature. In computer vision, the smallest unit is a pixel. Each pixel has a set of numerical values indicating color or intensity. In text, the smallest unit could be a word. Aggregating the Unicode values of the characters does not convey or embody the meaning of the word. In fact, these character codes embody no information at all about the character, such as its prevalence, or whether it is a consonant or a vowel. However, averaging the pixels in a section of an image could be a reasonable approximation of that region of the image.
It may represent how that region would look if seen from a large distance. A core problem, then, is to construct a numerical representation of words. Vectorization is the process of converting a word to a vector of numbers that embodies the information contained in the word. Depending on the vectorization technique, this vector may have additional properties that may allow comparison with other words, as will be shown in the Word vectors section later in this chapter. The simplest approach for vectorizing is to use counts of words. The second approach is more sophisticated, with its origins in information retrieval, and is called TF-IDF. The third approach is relatively new, having been published in 2013, and uses neural networks to generate embeddings or word vectors. This method is called Word2Vec. The newest method in this area as of the time of writing was BERT, which came out in the last quarter of 2018. The first three methods will be discussed in this chapter. BERT will be discussed in detail in Chapter 3, Named Entity Recognition (NER) with BiLSTMs, CRFs, and Viterbi Decoding.

Count-based vectorization

The idea behind count-based vectorization is really simple. Each unique word appearing in the corpus is assigned a column in the vocabulary. Each document, which would correspond to an individual message in the spam example, is assigned a row. The counts of the words appearing in that document are entered in the relevant cell corresponding to the document and the word. With n unique documents containing m unique words, this results in a matrix of n rows by m columns. Consider a corpus like so:

corpus = [
    "I like fruits. Fruits like bananas",
    "I love bananas but eat an apple",
    "An apple a day keeps the doctor away"
]

There are three documents in this corpus of text. The scikit-learn (sklearn) library provides methods for undertaking count-based vectorization.

Modeling after count-based vectorization

In Google Colab, this library should already be installed.
If it is not installed in your Python environment, it can be installed via the notebook like so:

!pip install sklearn

The CountVectorizer class provides a built-in tokenizer that separates tokens of two or more characters in length. This class takes a variety of options, including a custom tokenizer, a stop word list, the option to convert characters to lowercase prior to tokenization, and a binary mode that converts every positive count to 1. The defaults provide a reasonable choice for an English language corpus:

from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)
vectorizer.get_feature_names()

['an', 'apple', 'away', 'bananas', 'but', 'day', 'doctor', 'eat', 'fruits', 'keeps', 'like', 'love', 'the']

In the preceding code, a model is fit to the corpus. The last line prints out the tokens that are used as columns. The full matrix can be seen as follows:

X.toarray()

array([[0, 0, 0, 1, 0, 0, 0, 0, 2, 0, 2, 0, 0],
       [1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0],
       [1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1]])

This process has now converted a sentence such as "I like fruits. Fruits like bananas" into a vector (0, 0, 0, 1, 0, 0, 0, 0, 2, 0, 2, 0, 0). This is an example of context-free vectorization. Context-free refers to the fact that the order of the words in the document did not make any difference in the generation of the vector. This is merely counting the instances of the words in a document. Consequently, words with multiple meanings may be grouped into one, for example, bank. This may refer to a place near a river or a place to keep money. However, it does provide a method to compare documents and derive similarity. The cosine similarity or distance can be computed between two documents, to see which documents are similar to which other documents:

from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(X.toarray())

array([[1.        , 0.13608276, 0.        ],
       [0.13608276, 1.        , 0.3086067 ],
       [0.        , 0.3086067 , 1.        ]])

This shows that the first sentence and the second sentence have a 0.136 similarity score (on a scale of 0 to 1). The first and third sentences have nothing in common. The second and third sentences have a similarity score of 0.308 – the highest in this set. Another use case of this technique is to check the similarity of documents with given keywords. Let's say that the query is apple and bananas. The first step is to compute the vector of this query, and then compute the cosine similarity scores against the documents in the corpus:

query = vectorizer.transform(["apple and bananas"])
cosine_similarity(X, query)

array([[0.23570226],
       [0.57735027],
       [0.26726124]])

This shows that this query matches the second sentence in the corpus the best. The third sentence would rank second, and the first sentence would rank lowest. In a few lines, a basic search engine has been implemented, along with logic to serve queries! At scale, this is a very difficult problem, as the number of words or columns in a web crawler would top 3 billion. Every web page would be represented as a row, so that would also require billions of rows. Computing a cosine similarity in milliseconds to serve an online query and keeping the content of this matrix updated is a massive undertaking. The next step from this rather simple vectorization scheme is to consider the information content of each word in constructing this matrix.

Term Frequency-Inverse Document Frequency (TF-IDF)

In creating a vector representation of the document, only the presence of words was included – it does not factor in the importance of a word. If the corpus of documents being processed is about a set of recipes with fruits, then one may expect words like apples, raspberries, and washing to appear frequently. Term Frequency (TF) represents how often a word or token occurs in a given document. This is exactly what we did in the previous section.
In a set of documents about fruits and cooking, a word like apple may not be terribly specific to help identify a recipe. However, a word like tuile may be uncommon in that context. Therefore, it may help to narrow the search for recipes much faster than a word like raspberry. On a side note, feel free to search the web for raspberry tuile recipes. If a word is rare, we want to give it a higher weight, as it may contain more information than a common word. A term can be upweighted by the inverse of the number of documents it appears in. Consequently, words that occur in a lot of documents will get a smaller score compared to terms that appear in fewer documents. This is called the Inverse Document Frequency (IDF). Mathematically, the score of each term in a document can be computed as follows:

    TF-IDF(t, d) = TF(t, d) × IDF(t)

Here, t represents the word or term, and d represents a specific document. It is common to normalize the TF of a term in a document by the total number of tokens in that document. The IDF is defined as follows:

    IDF(t) = log(N / (1 + n_t))

Here, N represents the total number of documents in the corpus, and n_t represents the number of documents where the term is present. The addition of 1 in the denominator avoids a divide-by-zero error. Fortunately, sklearn provides methods to compute TF-IDF. The TF-IDF Vectorization section of the notebook contains the code for this section. Let's convert the counts from the previous section into their TF-IDF equivalents:

import pandas as pd
from sklearn.feature_extraction.text import TfidfTransformer

transformer = TfidfTransformer(smooth_idf=False)
tfidf = transformer.fit_transform(X.toarray())
pd.DataFrame(tfidf.toarray(), columns=vectorizer.get_feature_names())

This produces a DataFrame of TF-IDF scores, with one row per document and one column per vocabulary token. This should give some intuition on how TF-IDF is computed. Even with three toy sentences and a very limited vocabulary, many of the columns in each row are 0.
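The two formulas above can also be applied by hand to the toy corpus. The sketch below implements the formulas exactly as written in the text; note that this differs slightly from sklearn's TfidfTransformer (which smooths the IDF differently and L2-normalizes each row), so its numbers will not match the DataFrame output:

```python
import math

def tf_idf(term, doc, docs):
    """TF-IDF as defined above: TF normalized by document length,
    IDF(t) = log(N / (1 + n_t)). Differs from sklearn's variant."""
    tf = doc.count(term) / len(doc)
    n_t = sum(1 for d in docs if term in d)
    return tf * math.log(len(docs) / (1 + n_t))

# Tokenized toy corpus from the count vectorization section
docs = [
    ["like", "fruits", "fruits", "like", "bananas"],
    ["love", "bananas", "but", "eat", "an", "apple"],
    ["an", "apple", "day", "keeps", "the", "doctor", "away"],
]
# "fruits" occurs only in the first document, so it gets a positive score there
print(tf_idf("fruits", docs[0], docs))
# "bananas" appears in two of three documents: log(3 / 3) = 0
print(tf_idf("bananas", docs[0], docs))  # 0.0
```

The second result illustrates the downweighting at work: under this exact formula, a term present in two of three documents already has an IDF of zero, however often it appears within a document.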
This vectorization produces sparse representations. Now, this can be applied to the problem of detecting spam messages. Thus far, the features for each message have been computed based on some aggregate statistics and added to the pandas DataFrame. Now, the content of the message will be tokenized and converted into a set of columns. The TF-IDF score for each word or token will be computed for each message in the array. This is surprisingly easy to do with sklearn, as follows:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelEncoder

tfidf = TfidfVectorizer(binary=True)
X = tfidf.fit_transform(train['Message']).astype('float32')
X_test = tfidf.transform(test['Message']).astype('float32')
X.shape

(4459, 7741)

The second parameter shows that 7,741 tokens were uniquely identified. These are the columns of features that will be used in the model later. Note that the vectorizer was created with the binary flag. This implies that even if a token appears multiple times in a message, it is counted as one. The next line trains the TF-IDF model on the training dataset. Then, it converts the words in the test set according to the TF-IDF scores learned from the training set. Let's train a model on just these TF-IDF features.

Modeling using TF-IDF features

With these TF-IDF features, let's train a model and see how it does:

_, cols = X.shape
model2 = make_model(cols)  # to match tf-idf dimensions
y_train = train[['Spam']]
y_test = test[['Spam']]
model2.fit(X.toarray(), y_train, epochs=10, batch_size=10)

Train on 4459 samples
Epoch 1/10
4459/4459 [==============================] - 2s 380us/sample - loss: 0.3505 - accuracy: 0.8903
...
Epoch 10/10
4459/4459 [==============================] - 1s 323us/sample - loss: 0.0027 - accuracy: 1.0000

Whoa – we are able to classify every message correctly! In all honesty, the model is probably overfitting, so some regularization should be applied.
The test set gives this result:

model2.evaluate(X_test.toarray(), y_test)

1115/1115 [==============================] - 0s 134us/sample - loss: 0.0581 - accuracy: 0.9839
[0.05813191874545786, 0.9838565]

An accuracy rate of 98.39% is by far the best we have gotten in any model so far. Checking the confusion matrix, it is evident that this model is indeed doing very well:

y_test_pred = model2.predict_classes(X_test.toarray())
tf.math.confusion_matrix(tf.constant(y_test.Spam), y_test_pred)

Only 2 regular messages were classified as spam, while only 16 spam messages were classified as being not spam. This is indeed a very good model. Note that this dataset has Indonesian (or Bahasa) words as well as English words in it. Bahasa uses the Latin alphabet. This model, without using a lot of pretraining and knowledge of language, vocabulary, and grammar, was able to do a very reasonable job with the task at hand. However, this model ignores the relationships between words completely. It treats the words in a document as unordered items in a set. There are better models that vectorize the tokens in a way that preserves some of the relationships between the tokens. This is explored in the next section.

Word vectors

In the previous example, a row vector was used to represent a document. This was used as a feature for the classification model to predict spam labels. However, no information can be gleaned reliably from the relationships between words. In NLP, a lot of research has been focused on learning the words or representations in an unsupervised way. This is called representation learning. The output of this approach is a representation of a word in some vector space, and the word can be considered embedded in that space. Consequently, these word vectors are also called embeddings. The core hypothesis behind word vector algorithms is that words that occur near each other are related to each other.
To see the intuition behind this, consider two words, bake and oven. Given a sentence fragment of five words, where one of these words is present, what would be the probability of the other being present as well? You would be right in guessing that the probability is likely quite high. Suppose now that words are being mapped into some two-dimensional space. In that space, these two words should be closer to each other, and probably further away from words like astronomy and tractor. The task of learning these embeddings for the words can then be thought of as adjusting words in a giant multidimensional space where similar words are closer to each other and dissimilar words are further apart from each other. A revolutionary approach to do this is called Word2Vec. This algorithm was published by Tomas Mikolov and collaborators from Google in 2013. This approach produces dense vectors of the order of 50-300 dimensions generally (though larger are known), where most of the values are non-zero. In contrast, in our previous trivial spam example, the TF-IDF model had 7,741 dimensions. The original paper had two algorithms proposed in it: continuous bag-of-words and continuous skip-gram. On semantic tasks and overall, the performance of skip-gram was state of the art at the time of its publication. Consequently, the continuous skip-gram model with negative sampling has become synonymous with Word2Vec. The intuition behind this model is fairly straightforward. Consider this sentence fragment from a recipe: "Bake until the cookie is golden brown all over." Under the assumption that a word is related to the words that appear near it, a word from this fragment can be picked and a classifier can be trained to predict the words around it:

Figure 1.14: A window of 5 centered on cookie

Taking an example of a window of five words, the word in the center is used to predict two words before and two words after it.
In the preceding figure, the fragment is until the cookie is golden, with the focus on the word cookie. Assuming that there are 10,000 words in the vocabulary, a network can be trained to predict binary decisions given a pair of words. The training objective is that the network predicts true for pairs like (cookie, golden) while predicting false for (cookie, kangaroo). This particular approach is called Skip-Gram Negative Sampling (SGNS), and it considerably reduces the training time required for large vocabularies. Very similar to the single-layer neural model in the previous section, a model can be trained with a one-to-many output layer. The sigmoid activation would be changed to a softmax function. If the hidden layer has 300 units, then its dimensions would be 10,000 x 300; that is, for each of the words, there will be a set of weights. The objective of the training is to learn these weights. In fact, these weights become the embedding for that word once training is complete. The choice of units in the hidden layer is a hyperparameter that can be adapted for specific applications. 300 is commonly found, as it is available through pretrained embeddings on the Google News dataset. Finally, the error is computed as the sum of the categorical cross-entropy of all the word pairs in the negative and positive examples. The beauty of this model is that it does not require any supervised training data. Running sentences can be used to provide positive examples. For the model to learn effectively, it is important to provide negative samples as well. Words are randomly sampled using their probability of occurrence in the training corpus and fed as negative examples. To understand how the Word2Vec embeddings work, let's download a set of pretrained embeddings. The code shown in the following section can be found in the Word Vectors section of the notebook.
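The windowing described above is easy to make concrete. The sketch below uses skipgram_pairs, an invented helper rather than part of any Word2Vec API; it generates the (center, context) pairs that serve as positive examples for SGNS, while negative pairs would be sampled separately from the unigram distribution:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs for the skip-gram objective.
    window=2 means two words on each side, i.e. the 5-word window above."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = "bake until the cookie is golden brown all over".split()
# Context pairs for the focus word "cookie" (index 3)
print([p for p in skipgram_pairs(sentence) if p[0] == "cookie"])
# [('cookie', 'until'), ('cookie', 'the'), ('cookie', 'is'), ('cookie', 'golden')]
```

These four pairs correspond exactly to the fragment until the cookie is golden from Figure 1.14.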
Pretrained models using Word2Vec embeddings

Since we are only interested in experimenting with a pretrained model, we can use the Gensim library and its pretrained embeddings. Gensim should already be installed in Google Colab. It can be installed like so:

!pip install gensim

After the requisite imports, pretrained embeddings can be downloaded and loaded. Note that these particular embeddings are approximately 1.6 GB in size, so they may take a very long time to load (you may encounter some memory issues as well):

from gensim.models.word2vec import Word2Vec
import gensim.downloader as api

model_w2v = api.load("word2vec-google-news-300")

Another issue that you may run into is the Colab session expiring if left alone for too long while waiting for the download to finish. This may be a good time to switch to a local notebook, which will also be helpful in future chapters. Now, we are ready to inspect the similar words:

model_w2v.most_similar("cookies", topn=10)

[('cookie', 0.745154082775116),
 ('oatmeal_raisin_cookies', 0.6887780427932739),
 ('oatmeal_cookies', 0.662139892578125),
 ('cookie_dough_ice_cream', 0.6520504951477051),
 ('brownies', 0.6479344964027405),
 ('homemade_cookies', 0.6476464867591858),
 ('gingerbread_cookies', 0.6461867690086365),
 ('Cookies', 0.6341644525527954),
 ('cookies_cupcakes', 0.6275068521499634),
 ('cupcakes', 0.6258294582366943)]

This is pretty good. Let's see how this model does at a word analogy task:

model_w2v.doesnt_match(["USA", "Canada", "India", "Tokyo"])

'Tokyo'

The model is able to guess that, compared to the other words, which are all countries, Tokyo is the odd one out, as it is a city.
Now, let's try a very famous example of mathematics on these word vectors:

king = model_w2v['king']
man = model_w2v['man']
woman = model_w2v['woman']

queen = king - man + woman
model_w2v.similar_by_vector(queen)

[('king', 0.8449392318725586),
 ('queen', 0.7300517559051514),
 ('monarch', 0.6454660892486572),
 ('princess', 0.6156251430511475),
 ('crown_prince', 0.5818676948547363),
 ('prince', 0.5777117609977722),
 ('kings', 0.5613663792610168),
 ('sultan', 0.5376776456832886),
 ('Queen_Consort', 0.5344247817993164),
 ('queens', 0.5289887189865112)]

Given that king was provided as an input to the equation, it is simple to filter the inputs from the outputs, and queen would be the top result. SMS spam classification could be attempted using these embeddings. However, future chapters will cover the use of GloVe embeddings and BERT embeddings for sentiment analysis. A pretrained model like the preceding can be used to vectorize a document. Using these embeddings, models can be trained for specific purposes. In later chapters, newer methods of generating contextual embeddings, such as BERT, will be discussed in detail.

Summary

In this chapter, we worked through the basics of NLP, including collecting and labeling training data, tokenization, stop word removal, case normalization, POS tagging, stemming, and lemmatization. Some vagaries of these in languages such as Japanese and Russian were also covered. Using a variety of features derived from these approaches, we trained a model to classify spam messages, where the messages had a combination of English and Bahasa Indonesian words. This got us to a model with 94% accuracy. However, the major challenge in using the content of the messages was in defining a way to represent words as vectors such that computations could be performed on them. We started with a simple count-based vectorization scheme and then graduated to a more sophisticated TF-IDF approach, both of which produced sparse vectors.
This TF-IDF approach gave a model with 98%+ accuracy in the spam detection task. Finally, we saw a contemporary method of generating dense word embeddings, called Word2Vec. This method, though a few years old, is still very relevant in many production applications. Once the word embeddings are generated, they can be cached for inference, and that makes an ML model using these embeddings run with relatively low latency. We used a very basic deep learning model for solving the SMS spam classification task. Just as Convolutional Neural Networks (CNNs) are the predominant architecture in computer vision, Recurrent Neural Networks (RNNs), especially those based on Long Short-Term Memory (LSTM) and Bi-directional LSTMs (BiLSTMs), are most commonly used to build NLP models. In the next chapter, we cover the structure of LSTMs and build a sentiment analysis model using BiLSTMs. These models will be used extensively in creative ways to solve different NLP problems in future chapters.

2 Understanding Sentiment in Natural Language with BiLSTMs

Natural Language Understanding (NLU) is a significant subfield of Natural Language Processing (NLP). In the last decade, there has been a resurgence of interest in this field with the dramatic success of chatbots such as Amazon's Alexa and Apple's Siri. This chapter will introduce the broad area of NLU and its main applications. Specific model architectures called Recurrent Neural Networks (RNNs), with special units called Long Short-Term Memory (LSTM) units, have been developed to make the task of understanding natural language easier. LSTMs in NLP are analogous to convolution blocks in computer vision. We will take two examples to build models that can understand natural language. Our first example is understanding the sentiment of movie reviews. This will be the focus of this chapter. The other example is one of the fundamental building blocks of NLU, Named Entity Recognition (NER).
That will be the main focus of the next chapter. Building models capable of understanding sentiment requires the use of Bi-directional LSTMs (BiLSTMs) in addition to the use of techniques from Chapter 1, Essentials of NLP. Specifically, the following will be covered in this chapter:

• Overview of NLU and its applications
• Overview of RNNs and BiRNNs using LSTMs and BiLSTMs
• Analyzing the sentiment of movie reviews with LSTMs and BiLSTMs
• Using tf.data and the TensorFlow Datasets package to manage the loading of data
• Optimizing the performance of data loading for effective utilization of the CPU and GPU

We will start with a quick overview of NLU and then get right into BiLSTMs.

Natural language understanding

NLU enables the processing of unstructured text and extracts meaning and critical pieces of information that are actionable. Enabling a computer to understand sentences of text is a very hard challenge. One aspect of NLU is understanding the meaning of sentences. Sentiment analysis of a sentence becomes possible after understanding the sentence. Another useful application is the classification of sentences to a topic. This topic classification can also help in the disambiguation of entities. Consider the following sentence:

"A CNN helps improve the accuracy of object recognition."

Without understanding that this sentence is about machine learning, an incorrect inference may be made about the entity CNN. It may be interpreted as the news organization as opposed to a deep learning architecture used in computer vision. An example of a sentiment analysis model is built using a specific RNN architecture called BiLSTMs later in this chapter. Another aspect of NLU is to extract information or commands from free-form text. This text can be sourced from converting speech, as spoken to Amazon's Echo device, for example, into text. Rapid advances in speech recognition now allow considering speech as equivalent to text. Extracting commands from the text, like an object and an action to perform, allows control of devices through voice commands. Consider the example sentence "Lower the volume." Here, the object is "volume" and the action is "lower." After extraction from text, these actions can be matched to a list of available actions and executed. This capability enables advanced human-computer interaction (HCI), allowing control of home appliances through voice commands. NER is used for detecting key tokens in sentences.
Rapid advances in speech recognition now allow considering speech as equivalent to text. Extracting commands from the text, like an object and an action to perform, allows control of devices through voice commands. Consider the example sentence "Lower the volume." Here, the object is "volume" and the action is "lower." After extraction from text, these actions can be matched to a list of available actions and executed. This capability enables advanced human-computer interaction (HCI), allowing control of home appliances through voice commands. NER is used for detecting key tokens in sentences. [ 46 ] Chapter 2 This technique is incredibly useful in building form filling or slot filling chatbots. NER also forms the basis of other NLU techniques that perform tasks such as relation extraction. Consider the sentence "Sundar Pichai is the CEO of Google." In this sentence, what is the relationship between the entities "Sundar Pichai" and "Google"? The right answer is CEO. This is an example of relation extraction, and NER was used to identify the entities in the sentence. The focus of the next chapter is on NER using a specific architecture that has been quite effective in this space. A common building block of both sentiment analysis and NER models is Bi-directional RNN models. The next section describes BiLSTMs, which is Bidirectional RNN using LSTM units, prior to building a sentiment analysis model with it. Bi-directional LSTMs – BiLSTMs LSTMs are one of the styles of recurrent neural networks, or RNNs. RNNs are built to handle sequences and learn the structure of them. An RNN does that by using the output generated after processing the previous item in the sequence along with the current item to generate the next output. Mathematically, this can be expressed like so: 𝑓𝑓𝑡𝑡 (𝑥𝑥𝑡𝑡 ) = 𝑓𝑓(𝑓𝑓{𝑡𝑡𝑡𝑡} (𝑥𝑥𝑡𝑡𝑡𝑡 , 𝑥𝑥𝑡𝑡 ; 𝜃𝜃𝜃) This equation says that to compute the output at time t, the output at t-1 is used as an input along with the input data xt at the same time step. 
Along with this, a set of parameters or learned weights, represented by θ, is also used in computing the output. The objective of training an RNN is to learn these weights θ. This particular formulation of an RNN is unique: in previous examples, we have not used the output of one batch to determine the output of a future batch. While we focus on applications of RNNs to language, where a sentence is modeled as a sequence of words appearing one after the other, RNNs can also be applied to build general time-series models.

RNN building blocks

The previous section outlined the basic mathematical intuition of a recursive function that is a simplification of the RNN building block. Figure 2.1 represents a few time steps and adds detail to show the different weights used for computation in a basic RNN building block, or cell.

Figure 2.1: RNN unraveled

The basic cell is shown on the left. The input vector at a specific time or sequence step t is multiplied by a weight vector, represented in the diagram as U, to generate an activation in the middle part. The key part of this architecture is the loop in this activation part. The output of the previous step is multiplied by a weight vector, denoted by V in the figure, and added to the activation. This activation can be multiplied by another weight vector, represented by W, to produce the output of that step, shown at the top. In terms of sequence or time steps, this network can be unrolled. This unrolling is virtual, but it is represented on the right side of the figure. Mathematically, the activation at time step t can be represented by:

    a_t = U·x_t + V·a_{t-1}

The output at the same step can be computed like so:

    o_t = W·a_t

The mathematics of RNNs has been simplified here to provide intuition. Structurally, the network is very simple, as it is a single unit.
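The two equations above can be run directly. The following numpy fragment is a minimal sketch: the dimensions, the random weights, and the tanh nonlinearity (commonly applied to the activation, though omitted from the simplified equations) are illustrative assumptions, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(42)
input_dim, hidden_dim, output_dim = 3, 4, 2

U = rng.normal(size=(hidden_dim, input_dim))   # input-to-activation weights
V = rng.normal(size=(hidden_dim, hidden_dim))  # previous-activation weights
W = rng.normal(size=(output_dim, hidden_dim))  # activation-to-output weights

def rnn_forward(xs):
    """Unroll over a sequence: a_t = tanh(U.x_t + V.a_{t-1}), o_t = W.a_t."""
    a = np.zeros(hidden_dim)       # a_0: no previous activation yet
    outputs = []
    for x_t in xs:                 # the SAME U, V, W are reused at every step
        a = np.tanh(U @ x_t + V @ a)
        outputs.append(W @ a)
    return outputs

sequence = [rng.normal(size=input_dim) for _ in range(5)]
outputs = rnn_forward(sequence)
print(len(outputs), outputs[-1].shape)  # 5 (2,)
```

Note how the same three weight matrices serve every step; unrolling this loop five times is exactly what the right side of Figure 2.1 depicts.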
To exploit and learn the structure of the inputs passing through, the weight vectors U, V, and W are shared across time steps. The network does not have layers as seen in fully connected or convolutional networks. However, as it is unrolled over time steps, it can be thought of as having as many layers as there are steps in the input sequence. There are additional criteria that would need to be satisfied to make a deep RNN; more on that later in this section. These networks are trained using backpropagation and stochastic gradient descent techniques. The key thing to note here is that backpropagation happens through the sequence or time steps before backpropagating through layers. Having this structure enables the processing of sequences of arbitrary length. However, as the length of sequences increases, a couple of challenges emerge:

• Vanishing and exploding gradients: As the lengths of sequences increase, the gradients flowing back become smaller and smaller. This causes the network to train slowly or not learn at all, and the effect becomes more pronounced as sequence lengths grow. In the previous chapter, we built a network of a handful of layers. Here, a sentence of 10 words would equate to a network of 10 layers, and a 1-minute audio clip sampled in 10 ms steps would generate 6,000 steps! Conversely, gradients can also explode if the outputs keep increasing. The simplest way to manage vanishing gradients is through the use of ReLUs. For managing exploding gradients, a technique called gradient clipping is used: gradients are artificially clipped if their magnitude exceeds a threshold, which prevents them from becoming too large or exploding.

• Inability to manage long-term dependencies: Let's say that the third word in an eleven-word sentence is highly informative. Here is a toy example: "I think soccer is the most popular game across the world."
As the processing reaches the end of the sentence, the contribution of words earlier in the sequence becomes smaller and smaller due to repeated multiplication with the vector V, as shown above.

Two specific RNN cell designs mitigate these problems: Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). These are described next. Note, however, that TensorFlow provides implementations of both types of cells out of the box, so building RNNs with these cell types is almost trivial.

Long short-term memory (LSTM) networks

LSTM networks were proposed in 1997 and have since been improved upon and popularized by many researchers. They are widely used today for a variety of tasks and produce amazing results. An LSTM has four main parts:

• Cell value, or memory of the network, also referred to as the cell, which stores accumulated knowledge
• Input gate, which controls how much of the input is used in computing the new cell value
• Output gate, which determines how much of the cell value is used in the output
• Forget gate, which determines how much of the current cell value is used for updating the cell value

These are shown in the figure below:

Figure 2.2: LSTM cell (Source: Madsen, "Visualizing memorization in RNNs," Distill, 2019)

Training RNNs is a very complicated process fraught with many frustrations. Modern tools such as TensorFlow do a great job of managing the complexity and reducing the pain to a great extent. However, training RNNs is still a challenging task, especially without GPU support. The rewards of getting it right are well worth it, especially in the field of NLP.

After a quick introduction to GRUs, we will pick up LSTMs again, talk about BiLSTMs, and build a sentiment classification model.

Gated recurrent units (GRUs)

GRUs are another popular, and more recent, type of RNN unit. They were invented in 2014.
They are simpler than LSTMs:

Figure 2.3: Gated recurrent unit (GRU) architecture

Compared to the LSTM, the GRU has fewer gates: the input and forget gates are combined into a single update gate, and parts of the internal cell state and hidden state are merged together as well. This reduction in complexity makes GRUs easier to train. They have shown great results in the speech and sound domains; however, in neural machine translation tasks, LSTMs have shown superior performance. In this chapter, we will focus on using LSTMs.

Before we discuss BiLSTMs, let's take a sentiment classification problem and solve it with LSTMs. Then, we will try to improve the model with BiLSTMs.

Sentiment classification with LSTMs

Sentiment classification is an oft-cited use case of NLP. Models that predict the movement of stock prices using sentiment analysis features from tweets have shown promising results. Tweet sentiment is also used to determine customers' perceptions of brands. Another use case is processing user reviews for movies, or for products on e-commerce and other websites.

To see LSTMs in action, let's use a dataset of movie reviews from IMDb. This dataset was published at the ACL 2011 conference in a paper titled Learning Word Vectors for Sentiment Analysis. It has 25,000 review samples in the training set and another 25,000 in the test set.

A local notebook will be used for the code in this example. Chapter 10, Installation and Setup Instructions for Code, provides detailed instructions on how to set up the development environment. In short, you will need Python 3.7.5 and the following libraries to start:

• pandas 1.0.1
• NumPy 1.18.1
• TensorFlow 2.4 and the tensorflow_datasets 3.2.1 package
• Jupyter notebook

We will follow the overall process outlined in Chapter 1, Essentials of NLP. We start by loading the data we need.
Loading the data

In the previous chapter, we downloaded the data and loaded it with the pandas library. That approach loads the entire dataset into memory. However, data can sometimes be quite large, or spread across multiple files, making it too large to load at once and requiring lots of pre-processing. Making text data ready for use in a model requires normalization and vectorization at the very least. Often, this needs to be done outside of the TensorFlow graph using Python functions, which can hurt the reproducibility of the code. Further, it creates issues for data pipelines in production, where there is a higher chance of breakage as different dependent stages are executed separately. TensorFlow provides a solution for the loading, transformation, and batching of data through the tf.data package. In addition, a number of datasets are provided for download through the tensorflow_datasets package. We will use a combination of these to download the IMDb data and perform the tokenization, encoding, and vectorization steps before training an LSTM model.

All the code for the sentiment review example can be found in the GitHub repo under the chapter2-nlu-sentiment-analysisbilstm folder. The code is in an IPython notebook called IMDB Sentiment analysis.ipynb.

The first step is to install the appropriate packages and download the datasets:

    !pip install tensorflow_datasets

    import tensorflow as tf
    import tensorflow_datasets as tfds
    import numpy as np

The tfds package comes with a number of datasets in different domains such as images, audio, video, text, summarization, and so on.
To see the datasets available:

    ", ".join(tfds.list_builders())

    'abstract_reasoning, aeslc, aflw2k3d, amazon_us_reviews, arc, bair_robot_pushing_small, beans, big_patent, bigearthnet, billsum, binarized_mnist, binary_alpha_digits, c4, caltech101, caltech_birds2010, caltech_birds2011, cars196, cassava, cats_vs_dogs, celeb_a, celeb_a_hq, cfq, chexpert, cifar10, cifar100, cifar10_1, cifar10_corrupted, citrus_leaves, cityscapes, civil_comments, clevr, cmaterdb, cnn_dailymail, coco, coil100, colorectal_histology, colorectal_histology_large, cos_e, curated_breast_imaging_ddsm, cycle_gan, deep_weeds, definite_pronoun_resolution, diabetic_retinopathy_detection, div2k, dmlab, downsampled_imagenet, dsprites, dtd, duke_ultrasound, dummy_dataset_shared_generator, dummy_mnist, emnist, eraser_multi_rc, esnli, eurosat, fashion_mnist, flic, flores, food101, gap, gigaword, glue, groove, higgs, horses_or_humans, i_naturalist2017, image_label_folder, imagenet2012, imagenet2012_corrupted, imagenet_resized, imagenette, imagewang, imdb_reviews, iris, kitti, kmnist, lfw, librispeech, librispeech_lm, libritts, lm1b, lost_and_found, lsun, malaria, math_dataset, mnist, mnist_corrupted, movie_rationales, moving_mnist, multi_news, multi_nli, multi_nli_mismatch, natural_questions, newsroom, nsynth, omniglot, open_images_v4, opinosis, oxford_flowers102, oxford_iiit_pet, para_crawl, patch_camelyon, pet_finder, places365_small, plant_leaves, plant_village, plantae_k, qa4mre, quickdraw_bitmap, reddit_tifu, resisc45, rock_paper_scissors, rock_you, scan, scene_parse150, scicite, scientific_papers, shapes3d, smallnorb, snli, so2sat, speech_commands, squad, stanford_dogs, stanford_online_products, starcraft_video, sun397, super_glue, svhn_cropped, ted_hrlr_translate, ted_multi_translate, tf_flowers, the300w_lp, tiny_shakespeare, titanic, trivia_qa, uc_merced, ucf101, vgg_face2, visual_domain_decathlon, voc, wider_face, wikihow, wikipedia, wmt14_translate, wmt15_translate, wmt16_translate,
wmt17_translate, wmt18_translate, wmt19_translate, wmt_t2t_translate, wmt_translate, xnli, xsum, yelp_polarity_reviews'

That is a list of 155 datasets. Details of the datasets can be obtained on the TensorFlow Datasets catalog page.

IMDb data is provided in three splits: training, test, and unsupervised. The training and test splits have 25,000 rows each, with two columns. The first column is the text of the review, and the second is the label: "0" represents a review with negative sentiment, while "1" represents a review with positive sentiment. The following code loads the training and test data splits:

    imdb_train, ds_info = tfds.load(name="imdb_reviews", split="train",
                                    with_info=True, as_supervised=True)
    imdb_test = tfds.load(name="imdb_reviews", split="test",
                          as_supervised=True)

Note that this command may take a little time to execute, as the data is downloaded. ds_info contains information about the dataset; it is returned when the with_info parameter is supplied. Let's see the information contained in ds_info:

    print(ds_info)

We can see that two keys, text and label, are available in the supervised mode. Using the as_supervised parameter is key to loading the dataset as a tuple of values. If this parameter is not specified, the data is loaded and made available under dictionary keys. In cases where the data has multiple inputs, that may be preferable. To get a sense of the data that has been loaded:

    for example, label in imdb_train.take(1):
        print(example, '\n', label)

    tf.Tensor(0, shape=(), dtype=int64)

The above review is an example of a negative review. The next step is tokenization and vectorization of the reviews.

Normalization and vectorization

In Chapter 1, Essentials of NLP, we discussed a number of different normalization methods.
Here, we are only going to tokenize the text into words, construct a vocabulary, and then encode the words using this vocabulary. This is a simplified approach; a number of different approaches could be used to build additional features. Using techniques discussed in the first chapter, such as POS tagging, many features could be built, but that is left as an exercise for the reader. In this example, our aim is to use the same set of features on an RNN with LSTMs, and then on an improved model with BiLSTMs.

A vocabulary of the tokens occurring in the data needs to be constructed prior to vectorization. Tokenization breaks up the text into individual tokens; the set of all the tokens forms the vocabulary. Normalization of the text, such as converting to lowercase, is performed along with this tokenization step. tfds comes with a set of feature builders for text in the tfds.features.text package. First, a set of all the words in the training data needs to be created:

    tokenizer = tfds.features.text.Tokenizer()
    vocabulary_set = set()
    MAX_TOKENS = 0

    for example, label in imdb_train:
        some_tokens = tokenizer.tokenize(example.numpy())
        if MAX_TOKENS < len(some_tokens):
            MAX_TOKENS = len(some_tokens)
        vocabulary_set.update(some_tokens)

By iterating through the training examples, each review is tokenized and its words are added to a set. Adding them to a set keeps only the unique words. Note that tokens have not been converted to lowercase, which means the size of the vocabulary will be slightly larger. Using this vocabulary, an encoder can be created. TokenTextEncoder is one of three out-of-the-box encoders provided in tfds. Note how the list of tokens is converted into a set to ensure only unique tokens are retained in the vocabulary.
The tokenizer used for generating the vocabulary is passed in, so that every successive call to encode a string uses the same tokenization scheme. This encoder expects the tokenizer object to provide a tokenize() and a join() method. If you want to use StanfordNLP or another tokenizer as discussed in the previous chapter, all you need to do is wrap the StanfordNLP interface in a custom object and implement methods to split the text into tokens and join the tokens back into a string:

    imdb_encoder = tfds.features.text.TokenTextEncoder(vocabulary_set,
                                                       tokenizer=tokenizer)
    vocab_size = imdb_encoder.vocab_size
    print(vocab_size, MAX_TOKENS)

    93931 2525

The vocabulary has 93,931 tokens. The longest review has 2,525 tokens. That is one wordy review! Reviews are going to have different lengths, but LSTMs expect sequences of equal length. Padding and truncating operations are used to make reviews equal in length. Before we do that, let's test whether the encoder works correctly:

    for example, label in imdb_train.take(1):
        print(example)
        encoded = imdb_encoder.encode(example.numpy())
        print(imdb_encoder.decode(encoded))

Note that punctuation is removed from these reviews when they are reconstructed from the encoded representations. One convenience feature provided by the encoder is persisting the vocabulary to disk. This enables a one-time computation of the vocabulary for production use cases. Even during development, computation of the vocabulary can be a resource-intensive task prior to each run or restart of the notebook. Saving the vocabulary and the encoder to disk enables picking up coding and model building from any point after the vocabulary building step is complete.
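As a sketch of the wrapping idea described above, here is a hypothetical tokenizer class exposing the tokenize() and join() methods that TokenTextEncoder expects. The class name and the whitespace-splitting internals are illustrative assumptions; a real wrapper would delegate to StanfordNLP or another library inside these two methods.

```python
class WhitespaceTokenizer:
    """Hypothetical tokenizer exposing the interface TokenTextEncoder expects."""

    def tokenize(self, s):
        # A real wrapper would call e.g. StanfordNLP here instead of str.split
        if isinstance(s, bytes):
            s = s.decode("utf-8")
        return s.split()

    def join(self, tokens):
        # Reassemble tokens into a string; used when decoding
        return " ".join(tokens)

tok = WhitespaceTokenizer()
tokens = tok.tokenize("Good case. Excellent value.")
print(tokens)            # ['Good', 'case.', 'Excellent', 'value.']
print(tok.join(tokens))  # Good case. Excellent value.
```

Any object with this two-method interface could then be passed as the tokenizer argument when constructing the encoder.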
To save the encoder, use the following:

    imdb_encoder.save_to_file("reviews_vocab")

To load the encoder from the file and test it, the following commands can be used:

    enc = tfds.features.text.TokenTextEncoder.load_from_file("reviews_vocab")
    enc.decode(enc.encode("Good case. Excellent value."))

    'Good case Excellent value'

Tokenization and encoding were done for a small set of rows at a time. TensorFlow provides mechanisms to perform these actions in bulk over large datasets, which can be shuffled and loaded in batches. This allows very large datasets to be loaded without running out of memory during training. To enable this, a function needs to be defined that performs a transformation on a row of data. Note that multiple transformations can be chained one after the other. It is also possible to use a Python function in defining these transformations. For processing the reviews above, the following steps need to be performed:

• Tokenization: Reviews need to be tokenized into words.
• Encoding: These words need to be mapped to integers using the vocabulary.
• Padding: Reviews can have variable lengths, but LSTMs expect vectors of the same length. So, a constant length is chosen. Reviews shorter than this length are padded with a specific vocabulary index, usually 0 in TensorFlow. Reviews longer than this length are truncated.

Fortunately, TensorFlow provides such a function out of the box. The following functions perform these steps:

    def encode_pad_transform(sample):
        encoded = imdb_encoder.encode(sample.numpy())
        pad = tf.keras.preprocessing.sequence.pad_sequences([encoded],
                                                            padding='post',
                                                            maxlen=150)
        return np.array(pad[0], dtype=np.int64)

    def encode_tf_fn(sample, label):
        encoded = tf.py_function(encode_pad_transform,
                                 inp=[sample],
                                 Tout=(tf.int64))
        encoded.set_shape([None])
        label.set_shape([])
        return encoded, label

encode_tf_fn is called by the dataset API with one example at a time, that is, a tuple of the review and its label. This function in turn calls another function, encode_pad_transform, wrapped in a tf.py_function call, which performs the actual transformation. In this function, tokenization is performed first, followed by encoding, and finally padding and truncating.
A maximum length of 150 tokens or words is chosen for padding/truncating the sequences. Any Python logic can be used in this second function. For example, the StanfordNLP package could be used to perform POS tagging of the words, or stopwords could be removed as shown in the previous chapter. Here, we keep things simple.

Padding is an important step, as different layers in TensorFlow cannot handle tensors of different widths. Tensors of different widths are called ragged tensors. There is ongoing work to incorporate support for ragged tensors, and the support is improving; however, it is not yet universal in TensorFlow. Consequently, ragged tensors are avoided in this text.

Transforming the data is quite trivial. Let's try the code on a small sample of the data:

    subset = imdb_train.take(10)
    tst = subset.map(encode_tf_fn)
    for review, label in tst.take(1):
        print(review, label)
        print(imdb_encoder.decode(review))

    tf.Tensor(
    [40205  9679 80722 81643 29176  2673 44509 18966 82970  1902  2754 91375
     41135 71762 84093 76562 47559 59655  6569 13077 51728 91747 21013  7623
      6550 40338 18966 36012 64846 14002 73549 52960 40359 49248 62585 75017
     67425 18181 87701 56336 29928 64846 41917 49779 87701 62585 58974 18181
      7623  2615  7927 67321 40205  7623 43621 51728 29392 58948 76770 15030
     74878 86231 49390 69836 18353 49390 48352 87701 62200 13462 80285 76037
     75121 40768 86201 28257 76220 87157 29176  1766  9679 65053 67425 93397
     74878 67053 61304 64846 93397  7623 18560  9679 50741 44024 79648  7470
     28203 13192 47453  6386 18560 79892 49248  7158 91321 18181 88633 13929
      2615 91321 81643 29176  2615 65285 63778 13192 82970 28143 14618 44449
     39028     0     0     0     0     0     0     0     0     0     0     0
         0     0     0     0     0     0     0     0     0     0     0     0
         0     0     0     0     0], shape=(150,), dtype=int64)
    tf.Tensor(0, shape=(), dtype=int64)

Note the "0"s at the end of the encoded tensor in the first part of the output. That is a consequence of padding to 150 words.
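The padding and truncation behavior seen above can be sketched in plain Python. pad_or_truncate below is a hypothetical stand-in written for illustration; in the actual pipeline this work is done inside encode_pad_transform by TensorFlow's padding utility. The maximum length of 150 and the pad value of 0 match the chapter.

```python
def pad_or_truncate(encoded, maxlen=150, pad_value=0):
    """Post-pad short sequences with pad_value; cut long ones down to maxlen."""
    if len(encoded) >= maxlen:
        return encoded[:maxlen]                              # truncate long reviews
    return encoded + [pad_value] * (maxlen - len(encoded))   # pad short ones

short = [40205, 9679, 80722]        # a 3-token review
long_review = list(range(1, 201))   # a 200-token review

print(len(pad_or_truncate(short)))        # 150
print(pad_or_truncate(short)[-3:])        # [0, 0, 0] -- zeros appended at the end
print(len(pad_or_truncate(long_review)))  # 150
```

Every review thus leaves the transformation with exactly 150 entries, which is what allows them to be stacked into fixed-width tensors for the LSTM.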
Running this map over the entire dataset can be done like so:

    encoded_train = imdb_train.map(encode_tf_fn)
    encoded_test = imdb_test.map(encode_tf_fn)

This should execute really fast, because the mapping is only executed when the training loop runs. Other useful methods available in the tf.data.Dataset class, of which imdb_train and imdb_test are instances, are filter(), shuffle(), and batch(). filter() can remove certain types of data from the dataset; it can be used to filter out reviews above or below a certain length, or to separate out positive and negative examples to construct a more balanced dataset. shuffle() shuffles the data between training epochs, and batch() batches the data for training. Note that applying these methods in a different sequence will result in different datasets.

Performance optimization with tf.data

Figure 2.4: Illustrative example of the time taken by sequential execution of the map function (Source: Better Performance with the tf.data API at tensorflow.org/guide/data_performance)

As can be seen in the figure above, a number of operations contribute to the overall training time in an epoch. In this example chart, files need to be opened, as shown in the topmost row; data needs to be read, in the row below; a map transformation needs to be executed on the data being read; and only then can training happen. Since these steps happen in sequence, they make the overall training time longer. Instead, the mapping step can happen in parallel, resulting in shorter execution times overall. CPU power is used to prefetch, batch, and transform the data, while the GPU is used for training computations and operations such as gradient calculation and updating weights. This can be enabled by making a small change to the call to the map function above:

    encoded_train = imdb_train.map(encode_tf_fn,
                                   num_parallel_calls=tf.data.experimental.AUTOTUNE)
    encoded_test = imdb_test.map(encode_tf_fn,
                                 num_parallel_calls=tf.data.experimental.AUTOTUNE)

Passing this additional parameter enables TensorFlow to execute the transformation over multiple workers in parallel. This can result in a speedup, as shown below:

Figure 2.5: Illustrative example of a reduction in training time due to parallelization of map (Source: Better Performance with the tf.data API at tensorflow.org/guide/data_performance)

While we have normalized and encoded the text of the reviews, we have not yet converted it into word vectors or embeddings. That step is performed along with the model training. So, we are ready to start building a basic RNN model using LSTMs.

LSTM model with embeddings

TensorFlow and Keras make it trivial to instantiate an LSTM-based model. In fact, adding a layer of LSTMs is one line of code. The simplest form is shown below:

    tf.keras.layers.LSTM(rnn_units)

Here, the rnn_units parameter determines how many LSTM units are strung together in one layer. There are a number of other parameters that can be configured, but the defaults are fairly reasonable. The TensorFlow documentation details these options and possible values with examples quite well. However, the review text tokens cannot be fed as-is into the LSTM layer; they need to be vectorized using an embedding scheme. There are a couple of different approaches. The first is to learn the embeddings as the model trains. This is the approach we're going to use, as it is the simplest. In cases where your text data is unique to a domain, like medical transcriptions, this is also probably the best approach. This approach, however, requires significant amounts of training data for the embeddings to learn the right relationships between words.
The second approach is to use pre-trained embeddings, like Word2vec or GloVe, as shown in the previous chapter, and use them to vectorize the text. This approach has worked really well in general-purpose text models and can even be adapted to work very well in specific domains. Working with transfer learning is the focus of Chapter 4, Transfer Learning with BERT, though.

Coming back to learning embeddings, TensorFlow provides an embedding layer that can be added before the LSTM layer. Again, this layer has several options that are well documented. To complete the binary classification model, all that remains is a final dense layer with one unit for classification. A utility function that builds models with some configurable parameters can be defined like so:

    def build_model_lstm(vocab_size, embedding_dim, rnn_units, batch_size):
        model = tf.keras.Sequential([
            tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                      mask_zero=True,
                                      batch_input_shape=[batch_size, None]),
            tf.keras.layers.LSTM(rnn_units),
            tf.keras.layers.Dense(1, activation='sigmoid')
        ])
        return model

This function exposes a number of configurable parameters, allowing different architectures to be tried out. In addition to these parameters, batch size is another important setting. These can be configured as follows:

    vocab_size = imdb_encoder.vocab_size

    # The embedding dimension
    embedding_dim = 64

    # Number of RNN units
    rnn_units = 64

    # batch size
    BATCH_SIZE = 100

With the exception of the vocabulary size, all the other parameters can be changed to see the impact on model performance.
With these configurations set, the model can be constructed:

    model = build_model_lstm(
        vocab_size=vocab_size,
        embedding_dim=embedding_dim,
        rnn_units=rnn_units,
        batch_size=BATCH_SIZE)
    model.summary()

    Model: "sequential_3"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    embedding_3 (Embedding)      (100, None, 64)           6011584
    _________________________________________________________________
    lstm_3 (LSTM)                (100, 64)                 33024
    _________________________________________________________________
    dense_5 (Dense)              (100, 1)                  65
    =================================================================
    Total params: 6,044,673
    Trainable params: 6,044,673
    Non-trainable params: 0
    _________________________________________________________________

Such a small model has over 6 million trainable parameters. It is easy to check the size of the embedding layer: the total number of tokens in the vocabulary was 93,931, and each token is represented by a 64-dimensional embedding, which gives 93,931 × 64 = 6,011,584 parameters. This model is now ready to be compiled with the specification of the loss function, optimizer, and evaluation metrics. In this case, since there are only two labels, binary cross-entropy is used as the loss. The Adam optimizer is a very good choice with great defaults. Since we are doing binary classification, accuracy, precision, and recall are the metrics we would like to track during training.
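The other entries in the Param # column of the summary above can be verified by hand as well. An LSTM layer holds four weight blocks (input, forget, and output gates plus the cell candidate), each with recurrent weights, input weights, and a bias, giving 4 × (units × (units + input_dim) + units); the dense layer adds units + 1. A quick check using the sizes from the summary:

```python
vocab_size, embedding_dim, rnn_units = 93931, 64, 64

embedding_params = vocab_size * embedding_dim
# 4 gate/candidate blocks, each: recurrent weights + input weights + bias
lstm_params = 4 * (rnn_units * (rnn_units + embedding_dim) + rnn_units)
dense_params = rnn_units * 1 + 1

print(embedding_params)  # 6011584
print(lstm_params)       # 33024
print(dense_params)      # 65
print(embedding_params + lstm_params + dense_params)  # 6044673
```

The three values match the Embedding, LSTM, and Dense rows, and the sum reproduces the 6,044,673 total.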
Then, the dataset needs to be batched and training can be started:

    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy', 'Precision', 'Recall'])

    encoded_train_batched = encoded_train.batch(BATCH_SIZE)
    model.fit(encoded_train_batched, epochs=10)

    Epoch 1/10
    250/250 [==============================] - 23s 93ms/step - loss: 0.4311
    - accuracy: 0.7920 - Precision: 0.7677 - Recall: 0.8376
    Epoch 2/10
    250/250 [==============================] - 21s 83ms/step - loss: 0.1768
    - accuracy: 0.9353 - Precision: 0.9355 - Recall: 0.9351
    …
    Epoch 10/10
    250/250 [==============================] - 21s 85ms/step - loss: 0.0066
    - accuracy: 0.9986 - Precision: 0.9986 - Recall: 0.9985

That is a very good result! Let's compare it to the test set:

    model.evaluate(encoded_test.batch(BATCH_SIZE))

    250/Unknown - 20s 80ms/step - loss: 0.8682 - accuracy: 0.8063 -
    Precision: 0.7488 - Recall: 0.9219

The difference between the performance on the training and test sets implies that the model is overfitting. One way to manage overfitting is to introduce a dropout layer after the LSTM layer. This is left as an exercise for you.

The model above was trained using an NVIDIA RTX 2070 GPU. You may see longer times per epoch when training using a CPU only.

Now, let's see how BiLSTMs perform on this task.

BiLSTM model

Building BiLSTMs is easy in TensorFlow. All that is required is a one-line change in the model definition: in the build_model_lstm() function, the line that adds the LSTM layer needs to be modified.
The new function looks like this, the modified line being the Bidirectional wrapper around the LSTM layer:

    def build_model_bilstm(vocab_size, embedding_dim, rnn_units, batch_size):
        model = tf.keras.Sequential([
            tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                      mask_zero=True,
                                      batch_input_shape=[batch_size, None]),
            tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(rnn_units)),
            tf.keras.layers.Dense(1, activation='sigmoid')
        ])
        return model

But first, let's understand what a BiLSTM is:

Figure 2.6: LSTMs versus BiLSTMs

In a regular LSTM network, tokens or words are fed in one direction. As an example, take the review "This movie was really good." Each token, starting from the left, is fed into the LSTM unit, marked as a hidden unit, one at a time. The diagram above shows a version unrolled in time. What this means is that each successive word is considered as occurring at a time increment after the previous word. Each step produces an output that may or may not be useful, depending on the problem at hand. In the IMDb sentiment prediction case, only the final output is important, as it is fed to the dense layer to make a decision on whether the review was positive or negative.

If you are working with right-to-left languages such as Arabic and Hebrew, please feed the tokens right to left. It is important to understand the direction the next word or token comes from. If you are using a BiLSTM, then the direction may not matter as much.

Due to this unrolling in time, it may appear as if there are multiple hidden units. However, it is the same LSTM unit, as shown in Figure 2.2 earlier in the chapter; the output of the unit is fed back into the same unit at the next time step. In the case of a BiLSTM, there is a pair of hidden units: one operates on the tokens from left to right, while the other operates on the tokens from right to left. In other words, a forward LSTM model can only learn from tokens at past time steps.
A BiLSTM model can learn from tokens from both the past and the future. This allows the capturing of more dependencies between words and the structure of the sentence, and improves the accuracy of the model. Suppose the task is to predict the next word in this sentence fragment:

I jumped into the …

There are many possible completions to this sentence. Further, suppose that you had access to the words after the gap. Think about these three possibilities:

1. I jumped into the … with only a small blade
2. I jumped into the … and swam to the other shore
3. I jumped into the … from the 10m diving board

"Battle" or "fight" would be likely words for the first example, "river" for the second, and "swimming pool" for the last one. In each case, the beginning of the sentence was exactly the same, but the words from the end helped disambiguate which word should fill in the blank. This illustrates the difference between LSTMs and BiLSTMs: an LSTM can only learn from past tokens, while a BiLSTM can learn from both past and future tokens. This new BiLSTM model has a little over 12 million parameters.
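The 12 million figure can be broken down the same way as before. A bidirectional layer simply doubles the per-direction LSTM count of 4 × (units × (units + input_dim) + units). The sizes below (a 128-dimensional embedding, 64 units per direction, and two stacked bidirectional layers) are read off the summary that follows, which corresponds to a stacked variant with dropout rather than the single-layer function shown above:

```python
vocab_size, embedding_dim, rnn_units = 93931, 128, 64

def bilstm_layer_params(units, input_dim):
    # Each direction is a full LSTM; the bidirectional wrapper doubles the count
    return 2 * 4 * (units * (units + input_dim) + units)

embedding_params = vocab_size * embedding_dim            # 12023168
bilstm_1 = bilstm_layer_params(rnn_units, input_dim=128) # first layer sees the embedding
bilstm_2 = bilstm_layer_params(rnn_units, input_dim=128) # second sees 2 x 64 concatenated outputs
dense_params = 2 * rnn_units * 1 + 1                     # 129

print(bilstm_1)                                              # 98816
print(embedding_params + bilstm_1 + bilstm_2 + dense_params) # 12220929
```

The per-layer value of 98,816 and the total of 12,220,929 both match the summary.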
bilstm = build_model_bilstm(
    vocab_size=vocab_size,
    embedding_dim=embedding_dim,
    rnn_units=rnn_units,
    batch_size=BATCH_SIZE)
bilstm.summary()

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_1 (Embedding)      (50, None, 128)           12023168
_________________________________________________________________
dropout (Dropout)            (50, None, 128)           0
_________________________________________________________________
bidirectional (Bidirectional (50, None, 128)           98816
_________________________________________________________________
dropout_1 (Dropout)          (50, None, 128)           0
_________________________________________________________________
bidirectional_1 (Bidirection (50, 128)                 98816
_________________________________________________________________
dropout_2 (Dropout)          (50, 128)                 0
_________________________________________________________________
dense_1 (Dense)              (50, 1)                   129
=================================================================
Total params: 12,220,929
Trainable params: 12,220,929
Non-trainable params: 0
_________________________________________________________________

If you run the model shown above with no other changes, you will see a boost in the accuracy and precision of the model:

bilstm.fit(encoded_train_batched, epochs=5)

Epoch 1/5
500/500 [==============================] - 80s 160ms/step - loss: 0.3731 - accuracy: 0.8270 - Precision: 0.8186 - Recall: 0.8401
…
Epoch 5/5
500/500 [==============================] - 70s 139ms/step - loss: 0.0316 - accuracy: 0.9888 - Precision: 0.9886 - Recall: 0.9889

bilstm.evaluate(encoded_test.batch(BATCH_SIZE))

500/Unknown - 20s 40ms/step - loss: 0.7280 - accuracy: 0.8389 - Precision: 0.8650 - Recall: 0.8032

Note that the model is severely overfitting. It is important to add some form of regularization to the model.
Out of the box, with no feature engineering or use of the unsupervised data for learning better embeddings, the accuracy of the model is above 83.5%. The current state-of-the-art results on this data, published in August 2019, have an accuracy of 97.42%. Some ideas that can be tried to improve this model include:

• Stacking layers of LSTMs or BiLSTMs, with some dropout for regularization
• Using the unsupervised split of the dataset, along with the training and testing review text, to learn better embeddings and using those in the final network
• Adding more features, such as word shapes and POS tags

We will pick up this example again in Chapter 4, Transfer Learning with BERT, when we discuss language models such as BERT. Maybe this example will be an inspiration for you to try your own model and publish a paper with your state-of-the-art results!

Note that BiLSTMs, while powerful, may not be suitable for all applications. Using a BiLSTM architecture assumes that the entire text or sequence is available at the same time. This assumption may not hold in some cases. In the case of speech recognition of commands in a chatbot, only the sounds spoken so far by the user are available; it is not known what words the user is going to utter in the future. In real-time time-series analytics, only data from the past is available. In such applications, BiLSTMs cannot be used.

Also, note that RNNs really shine when trained on very large amounts of data over several epochs. The IMDb dataset, with 25,000 training examples, is on the smaller side for RNNs to show their power. You may find you achieve similar or better results using TF-IDF and logistic regression with some feature engineering.

Summary

This is a foundational chapter in our journey through advanced NLP problems. Many advanced models use building blocks such as BiRNNs. First, we used the TensorFlow Datasets package to load data.
Our work of building a vocabulary, tokenizer, and encoder for vectorization was simplified through the use of this library. After understanding LSTMs and BiLSTMs, we built models to do sentiment analysis. Our work showed promise but was far from the state-of-the-art results, which will be addressed in future chapters. However, we are now armed with the fundamental building blocks that will enable us to tackle more challenging problems. With this knowledge of LSTMs, we are ready to build our first NER model using BiLSTMs in the next chapter. Once this model is built, we will try to improve it using CRFs and Viterbi decoding.

3
Named Entity Recognition (NER) with BiLSTMs, CRFs, and Viterbi Decoding

One of the fundamental building blocks of NLU is Named Entity Recognition (NER). The names of people, companies, products, and quantities can be tagged in a piece of text with NER, which is very useful in chatbot applications and many other use cases in information retrieval and extraction. NER will be the main focus of this chapter. Building and training a model capable of doing NER requires several techniques, such as Conditional Random Fields (CRFs) and Bi-directional LSTMs (BiLSTMs). Advanced TensorFlow techniques like custom layers, losses, and training loops are also used. We will build on the knowledge of BiLSTMs gained from the previous chapter. Specifically, the following will be covered:

• Overview of NER
• Building an NER tagging model with BiLSTM
• CRFs and the Viterbi algorithm
• Building a custom Keras layer for CRFs
• Building a custom loss function in Keras and TensorFlow
• Training a model with a custom training loop

It all starts with understanding NER, which is the focus of the next section.
Named Entity Recognition

Given a sentence or a piece of text, the objective of an NER model is to locate and classify text tokens as named entities in categories such as people's names, organizations and companies, physical locations, quantities, monetary quantities, times, dates, and even protein or DNA sequences. NER should tag the following sentence:

Ashish paid Uber $80 to go to the Twitter offices in San Francisco.

as follows:

[Ashish]PER paid [Uber]ORG [$80]MONEY to go to the [Twitter]ORG offices in [San Francisco]LOC.

Here is an example from the Google Cloud Natural Language API, with several additional classes:

Figure 3.1: An NER example from the Google Cloud Natural Language API

The most common tags are listed in the table below:

Type          Tag      Example
Person        PER      Gregory went to the castle.
Organization  ORG      WHO just issued an epidemic advisory.
Location      LOC      She lives in Seattle.
Money         MONEY    You owe me twenty dollars.
Percentage    PERCENT  Stocks have risen 10% today.
Date          DATE     Let's meet on Wednesday.
Time          TIME     Is it 5 pm already?

There are different data sets and tagging schemes that can be used to train NER models. Different data sets will have different subsets of the tags listed above. In other domains, there may be additional tags specific to the domain. The Defence Science and Technology Laboratory in the UK created a data set called re3d (https://github.com/dstl/re3d), which has entity types such as vehicle (Boeing 777), weapon (rifle), and military platform (tank). The availability of adequately sized labeled data sets in various languages is a significant challenge. Here is a link to a good collection of NER data sets:. In many use cases, you will need to spend a lot of time collecting and annotating data. For example, if you are building a chatbot for ordering pizza, the entities could be bases, sauces, sizes, and toppings.
There are a few different ways to build an NER model. If the sentence is considered a sequence, then this task can be modeled as a word-by-word labeling task. Hence, models similar to those used for Part of Speech (POS) tagging are applicable. Features can be added to a model to improve labeling. The POS of a word and its neighboring words are the most straightforward features to add. Word shape features, which model the pattern of uppercase and lowercase letters in a token, can add a lot of information, principally because many of the entity types deal with proper nouns, such as those for people and organizations. Organization names can also be abbreviated; for example, the World Health Organization can be represented as WHO. Note that this feature will only work in languages that distinguish between lowercase and uppercase letters.

Another vital feature involves checking a word against a gazetteer. A gazetteer is like a database of important geographical entities. See geonames.org for an example of a data set licensed under Creative Commons. A set of people's names in the USA can be sourced from the US Social Security Administration at .gov/oact/babynames/state/namesbystate.zip. The linked ZIP file has the names of people born in the United States since 1910, grouped by state. Similarly, Dun & Bradstreet, popularly known as D&B, offers a licensable data set of over 200 million businesses across the world. The biggest challenge with this approach is the complexity of maintaining these lists over time.

In this chapter, we will focus on a model that does not rely on external data beyond the labeled training data (such as a gazetteer) and has no dependence on hand-crafted features. We will try to get to as high a level of accuracy as possible using deep neural networks and some additional techniques. The model we will use will be a combination of a BiLSTM and a CRF on top.
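To make the word-shape idea above concrete, here is a minimal sketch. The function name and the run-compression variant are illustrative choices of mine, not part of this chapter's pipeline:

```python
import re

def word_shape(token):
    """Collapse a token into a shape string: uppercase -> 'X',
    lowercase -> 'x', digits -> 'd'; other characters kept as-is."""
    shape = re.sub(r"[A-Z]", "X", token)
    shape = re.sub(r"[a-z]", "x", shape)
    shape = re.sub(r"[0-9]", "d", shape)
    # A common variant compresses runs of the same symbol to keep
    # the feature space small, e.g. 'Xxxxxx' -> 'Xx'
    short = re.sub(r"(.)\1+", r"\1", shape)
    return shape, short

print(word_shape("London"))  # ('Xxxxxx', 'Xx')
print(word_shape("WHO"))     # ('XXX', 'X')
print(word_shape("$80"))     # ('$dd', '$d')
```

Such shape strings can be fed to a model as categorical features alongside the token itself, which is one way case information helps with proper nouns like people and organizations.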
This model is based on the paper titled Neural Architectures for Named Entity Recognition, written by Guillaume Lample et al. and presented at the NAACL-HLT conference in 2016. This paper was state of the art in 2016, with an F1 score of 90.94. Currently, the state of the art has an F1 score of 93.5, where the model uses extra training data. These numbers are measured on the CoNLL 2003 English data set. The GMB data set will be used in this chapter. The next section describes this data set.

The GMB data set

With all the basics in the bag, we are ready to build a model that classifies NERs. For this task, the Groningen Meaning Bank (GMB) data set will be used. This data set is not considered a gold standard, which means that it was built using automatic tagging software, with human raters updating subsets of the data afterwards. However, it is a very large and rich data set with many useful annotations, making it quite suitable for training models. It is also constructed from public domain text, making it easy to use for training. The following named entities are tagged in this corpus:

• geo = Geographical entity
• org = Organization
• per = Person
• gpe = Geopolitical entity
• tim = Time indicator
• art = Artifact
• eve = Event
• nat = Natural phenomenon

In each of these categories, there can be subcategories. For example, tim may be further subdivided and represented as tim-dow, representing a time entity corresponding to a day of the week, or tim-dat, which represents a date. For this exercise, these sub-entities are going to be aggregated into the eight top-level entities listed above. The number of examples varies widely between the sub-entities. Consequently, the accuracy varies widely due to the lack of enough training data for some of these subcategories.

The data set also provides the NER entity for each word. In many cases, an entity may comprise multiple words.
If Hyde Park is a geographical entity, both words will be tagged as a geo entity. In terms of training models for NER, there is another way to represent this data that can have a significant impact on the accuracy of the model: the BIO tagging scheme. In this scheme, the first word of an entity, whether single-word or multi-word, is tagged with B-{entity tag}. If the entity is multi-word, each successive word is tagged as I-{entity tag}. In the example above, Hyde Park would be tagged as B-geo I-geo. All of these are pre-processing steps required for the data set. All the code for this example can be found in the NER with BiLSTM and CRF.ipynb notebook in the chapter3-ner-with-lstm-crf folder of the GitHub repository.

Let's get started by loading and processing the data. Data can be downloaded from the University of Groningen website as follows:

# alternate: download the file from the browser and put
# in the same directory as this notebook
!wget
!unzip gmb-2.2.0.zip

Please note that the data is quite large – over 800MB. If wget is not available on your system, you may use any other tool, such as curl or a browser, to download the data set. This step may take some time to complete. If you have a challenge accessing the data set from the University server, you may download a copy from Kaggle:. Also note that since we are going to be working with large data sets, some of the following steps may take some time to execute. In the world of Natural Language Processing (NLP), more training data and training time are key to great results.

The data unzips into the gmb-2.2.0 folder. The data subfolder has a number of subfolders with different files. The README supplied with the data set provides details about the various files and their contents.
For this example, we will be using only the files named en.tags in the various subdirectories. These files are tab-separated, with each word of a sentence in a row. There are ten columns of information:

• The token itself
• A POS tag as used in the Penn Treebank ( treebank/doc/tagguide.ps.gz)
• A lemma
• A named-entity tag, or 0 if none
• A WordNet word sense number for the respective lemma-POS combination, or 0 if not applicable
• For verbs and prepositions, a list of the VerbNet roles of the arguments in order of combination in the Combinatory Categorial Grammar (CCG) derivation, or [] if not applicable ( projects/verbnet.html)
• The semantic relation in noun-noun compounds, possessive apostrophes, temporal modifiers, and so on, indicated using a preposition, or 0 if not applicable
• An animacy tag as proposed by Zaenen et al. (2004), or 0 if not applicable
• A supertag (the lexical category of CCG)
• The lambda-DRS representing the semantics of the token in Boxer's Prolog format

Out of these fields, we are going to use only the token and the named-entity tag. However, we will also work through loading the POS tag for a future exercise. The following code gets the paths of all these tags files:

import os
data_root = './gmb-2.2.0/data/'
fnames = []
for root, dirs, files in os.walk(data_root):
    for filename in files:
        if filename.endswith(".tags"):
            fnames.append(os.path.join(root, filename))
fnames[:2]

['./gmb-2.2.0/data/p57/d0014/en.tags',
 './gmb-2.2.0/data/p57/d0382/en.tags']

A few processing steps need to happen. Each file has a number of sentences, with each word in a row. The entire sentence as a sequence, and the corresponding sequence of NER tags, need to be fed in as inputs while training the model. As mentioned above, the NER tags also need to be simplified to the top-level entities only, and then converted to the IOB format.
IOB stands for Inside-Outside-Beginning. These letters are used as a prefix to the NER tag. The sentence fragment in the table below shows how this scheme works:

Reverend  Terry  Jones  arrived  in  New    York
B-per     I-per  I-per  O        O   B-geo  I-geo

The table above shows this tagging scheme after processing. Note that New York is one location. As soon as New is encountered, it marks the start of the geo NER tag, hence it is assigned B-geo. The next word is York, which is a continuation of the same geographical entity. For any network, classifying the word New as the start of a geographical entity is going to be very challenging. However, a BiLSTM network can see the succeeding words, which helps quite a bit with disambiguation. Furthermore, the advantage of IOB tags is that the accuracy of the model improves considerably in terms of detection. This happens because once the beginning of an NER tag is detected, the choices for the next tag become quite limited.

Let's get to the code. First, create a directory to store all the processed files:

!mkdir ner

We want to process the tags so that we strip out the subcategories of the NER tags. It would also be nice to collect some stats on the types of tags in the documents:

import csv
import collections

ner_tags = collections.Counter()
iob_tags = collections.Counter()

def strip_ner_subcat(tag):
    # NER tags are of form {cat}-{subcat}
    # eg tim-dow. We only want first part
    return tag.split("-")[0]

The NER tag and IOB tag counters are set up above, and a method for stripping the subcategory out of the NER tags is defined.
The next method takes a sequence of tags and converts them into IOB format:

def iob_format(ners):
    # converts IO tags into IOB format
    # input is a sequence of IO NER tokens
    # convert this: O, PERSON, PERSON, O, O, LOCATION, O
    # into: O, B-PERSON, I-PERSON, O, O, B-LOCATION, O
    iob_tokens = []
    for idx, token in enumerate(ners):
        if token != 'O':  # != other
            if idx == 0:
                token = "B-" + token  # start of sentence
            elif ners[idx-1] == token:
                token = "I-" + token  # continues
            else:
                token = "B-" + token
        iob_tokens.append(token)
        iob_tags[token] += 1
    return iob_tokens

Once these two convenience functions are ready, all the tags files need to be read and processed:

total_sentences = 0
outfiles = []
for idx, file in enumerate(fnames):
    with open(file, 'rb') as content:
        data = content.read().decode('utf-8').strip()
        sentences = data.split("\n\n")
        print(idx, file, len(sentences))
        total_sentences += len(sentences)

        with open("./ner/" + str(idx) + "-" + os.path.basename(file),
                  'w') as outfile:
            outfiles.append("./ner/" + str(idx) + "-" +
                            os.path.basename(file))
            writer = csv.writer(outfile)
            for sentence in sentences:
                toks = sentence.split('\n')
                words, pos, ner = [], [], []
                for tok in toks:
                    t = tok.split("\t")
                    words.append(t[0])
                    pos.append(t[1])
                    ner_tags[t[3]] += 1
                    ner.append(strip_ner_subcat(t[3]))
                writer.writerow([" ".join(words),
                                 " ".join(iob_format(ner)),
                                 " ".join(pos)])

First, a counter for the number of sentences is set up, and a list for the paths of the written files is initialized. As processed files are written out, their paths are added to the outfiles variable. This list will be used later to load all the data and train the model. Files are read and split on empty lines (two consecutive newline characters), which mark the end of a sentence in the file. Only the actual words, POS tokens, and NER tokens are used from each file. Once these are collected, a new CSV file is written with three columns: the sentence, the sequence of NER tags in IOB format, and the sequence of POS tags.
This step may take a little while to execute:

print("total number of sentences: ", total_sentences)

total number of sentences:  62010

To confirm the distribution of the NER tags before and after processing, we can use the following code:

print(ner_tags)
print(iob_tags)

Counter({'O': 1146068, 'geo-nam': 58388, 'org-nam': 48034, 'per-nam': 23790, 'gpe-nam': 20680, 'tim-dat': 12786, 'tim-dow': 11404, 'per-tit': 9800, 'per-fam': 8152, 'tim-yoc': 5290, 'tim-moy': 4262, 'per-giv': 2413, 'tim-clo': 891, 'art-nam': 866, 'eve-nam': 602, 'nat-nam': 300, 'tim-nam': 146, 'eve-ord': 107, 'org-leg': 60, 'per-ini': 60, 'per-ord': 38, 'tim-dom': 10, 'art-add': 1, 'per-mid': 1})
Counter({'O': 1146068, 'B-geo': 48876, 'B-tim': 26296, 'B-org': 26195, 'I-per': 22270, 'B-per': 21984, 'I-org': 21899, 'B-gpe': 20436, 'I-geo': 9512, 'I-tim': 8493, 'B-art': 503, 'B-eve': 391, 'I-art': 364, 'I-eve': 318, 'I-gpe': 244, 'B-nat': 238, 'I-nat': 62})

As is evident, some tags, like tim-dom, were very infrequent. It would be next to impossible for a network to learn them. Aggregating up one level helps increase the signal for these tags. To check that the entire process completed properly, verify that the ner folder has 10,000 files. Now, let us load the processed data to normalize, tokenize, and vectorize it.

Normalizing and vectorizing data

For this section, pandas and numpy methods will be used. The first step is to load the contents of the processed files into one DataFrame:

import glob
import pandas as pd

# could use the `outfiles` list from above as well
files = glob.glob("./ner/*.tags")
data_pd = pd.concat([pd.read_csv(f, header=None,
                                 names=["text", "label", "pos"])
                     for f in files], ignore_index=True)

This step may take a while, given that it is processing 10,000 files.
Once the content is loaded, we can check the structure of the DataFrame:

data_pd.info()

RangeIndex: 62010 entries, 0 to 62009
Data columns (total 3 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   text    62010 non-null  object
 1   label   62010 non-null  object
 2   pos     62010 non-null  object
dtypes: object(3)
memory usage: 1.4+ MB

Both the text and the NER tags need to be tokenized and encoded into numbers for use in training. We are going to use core methods provided by the keras.preprocessing package. First, the tokenizer will be used to tokenize the text. In this example, the text only needs to be tokenized on white space, as it has already been broken up:

### Keras tokenizer
from tensorflow.keras.preprocessing.text import Tokenizer

text_tok = Tokenizer(filters='[\\]^\t\n', lower=False,
                     split=' ', oov_token='<OOV>')
pos_tok = Tokenizer(filters='\t\n', lower=False,
                    split=' ', oov_token='<OOV>')
ner_tok = Tokenizer(filters='\t\n', lower=False,
                    split=' ', oov_token='<OOV>')

The default values for the tokenizer are quite reasonable. However, in this particular case, it is important to only tokenize on spaces and not clean out the special characters; otherwise, the data will become mis-formatted:

text_tok.fit_on_texts(data_pd['text'])
pos_tok.fit_on_texts(data_pd['pos'])
ner_tok.fit_on_texts(data_pd['label'])

Even though we do not use the POS tags, the processing for them is included. Using POS tags can have an impact on the accuracy of an NER model; many NER entities are nouns, for example. However, we will only show how to process the POS tags, not use them as features in the model. That is left as an exercise to the reader. This tokenizer has some useful features. For example, it provides a way to restrict the size of the vocabulary by word counts, TF-IDF, and so on. If the num_words parameter is passed a numeric value, the tokenizer will limit the number of tokens, selected by word frequency, to that number.
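To build intuition for what fitting the tokenizer and the num_words cap do, here is a toy approximation in plain Python. This is only a sketch: the real Keras Tokenizer differs in details (for instance, num_words is actually applied when encoding, not when fitting), and the function names here are mine:

```python
from collections import Counter

def build_vocab(texts, num_words=None, oov_token="<OOV>"):
    """Toy sketch of frequency-ranked vocabulary construction:
    index words by descending frequency, reserving index 1 for OOV."""
    counts = Counter(w for t in texts for w in t.split(" "))
    most_common = counts.most_common(
        None if num_words is None else num_words - 1)
    word_index = {oov_token: 1}
    for i, (word, _) in enumerate(most_common, start=2):
        word_index[word] = i
    return word_index

def encode(text, word_index):
    # unseen words map to the OOV index, 1
    return [word_index.get(w, 1) for w in text.split(" ")]

vocab = build_vocab(["the cat sat", "the dog sat"], num_words=3)
print(vocab)                          # {'<OOV>': 1, 'the': 2, 'sat': 3}
print(encode("the bird sat", vocab))  # [2, 1, 3]
```

The key behavior to notice is that rare words simply collapse onto the OOV index, which is how capping the vocabulary trades coverage for a smaller embedding matrix.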
The fit_on_texts method takes in all the texts, tokenizes them, and constructs dictionaries that will later be used to tokenize and encode in one go. A convenience function, get_config(), can be called after the tokenizer has been fit on texts to provide information about the tokens:

ner_config = ner_tok.get_config()
text_config = text_tok.get_config()
print(ner_config)

{'num_words': None, 'filters': '\t\n', 'lower': False, 'split': ' ', 'char_level': False, 'oov_token': '<OOV>', 'document_count': 62010,
'word_counts': '{"B-geo": 48876, "O": 1146068, "I-geo": 9512, "B-per": 21984, "I-per": 22270, "B-org": 26195, "I-org": 21899, "B-tim": 26296, "I-tim": 8493, "B-gpe": 20436, "B-art": 503, "B-nat": 238, "B-eve": 391, "I-eve": 318, "I-art": 364, "I-gpe": 244, "I-nat": 62}',
'word_docs': '{"I-geo": 7738, "O": 61999, "B-geo": 31660, "B-per": 17499, "I-per": 13805, "B-org": 20478, "I-org": 11011, "B-tim": 22345, "I-tim": 5526, "B-gpe": 16565, "B-art": 425, "B-nat": 211, "I-eve": 201, "B-eve": 361, "I-art": 207, "I-gpe": 224, "I-nat": 50}',
'index_docs': '{"10": 7738, "2": 61999, "3": 31660, "7": 17499, "6": 13805, "5": 20478, "8": 11011, "4": 22345, "11": 5526, "9": 16565, "12": 425, "17": 211, "15": 201, "13": 361, "14": 207, "16": 224, "18": 50}',
'index_word': '{"1": "<OOV>", "2": "O", "3": "B-geo", "4": "B-tim", "5": "B-org", "6": "I-per", "7": "B-per", "8": "I-org", "9": "B-gpe", "10": "I-geo", "11": "I-tim", "12": "B-art", "13": "B-eve", "14": "I-art", "15": "I-eve", "16": "I-gpe", "17": "B-nat", "18": "I-nat"}',
'word_index': '{"<OOV>": 1, "O": 2, "B-geo": 3, "B-tim": 4, "B-org": 5, "I-per": 6, "B-per": 7, "I-org": 8, "B-gpe": 9, "I-geo": 10, "I-tim": 11, "B-art": 12, "B-eve": 13, "I-art": 14, "I-eve": 15, "I-gpe": 16, "B-nat": 17, "I-nat": 18}'}

The index_word dictionary property in the config provides a mapping between IDs and tokens.
There is a considerable amount of information in the config. The vocabularies can be obtained from it:

text_vocab = eval(text_config['index_word'])
ner_vocab = eval(ner_config['index_word'])
print("Unique words in vocab:", len(text_vocab))
print("Unique NER tags in vocab:", len(ner_vocab))

Unique words in vocab: 39422
Unique NER tags in vocab: 18

Tokenizing and encoding the text and the named-entity labels is quite easy:

x_tok = text_tok.texts_to_sequences(data_pd['text'])
y_tok = ner_tok.texts_to_sequences(data_pd['label'])

Since the sequences are of different sizes, they will all be padded or truncated to a length of 50 tokens. A helper function is used for this task:

# now, pad sequences to a maximum length
from tensorflow.keras.preprocessing import sequence
max_len = 50
x_pad = sequence.pad_sequences(x_tok, padding='post', maxlen=max_len)
y_pad = sequence.pad_sequences(y_tok, padding='post', maxlen=max_len)
print(x_pad.shape, y_pad.shape)

(62010, 50) (62010, 50)

The last step above ensures that the shapes are correct before moving on. Verifying shapes is a very important part of developing code in TensorFlow. There is an additional step that needs to be performed on the labels. Since there are multiple label classes, each label token needs to be one-hot encoded, like so:

num_classes = len(ner_vocab) + 1
Y = tf.keras.utils.to_categorical(y_pad, num_classes=num_classes)
Y.shape

(62010, 50, 19)

Now, we are ready to build and train a model.

A BiLSTM model

The first model we will try is a BiLSTM model.
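The padding step can be understood with a small plain-Python sketch. Note one caveat: Keras' pad_sequences defaults to truncating='pre', i.e. it drops tokens from the front of over-long sequences, while this illustrative toy version truncates from the end:

```python
def pad_sequences_post(seqs, maxlen, value=0):
    """Toy sketch of right-padding to a fixed length: short sequences
    are padded with `value`, long ones truncated (here, from the end)."""
    out = []
    for s in seqs:
        s = list(s)[:maxlen]                          # truncate
        out.append(s + [value] * (maxlen - len(s)))   # right-pad
    return out

padded = pad_sequences_post([[5, 6, 7], [1, 2, 3, 4, 5, 6]], maxlen=4)
print(padded)  # [[5, 6, 7, 0], [1, 2, 3, 4]]
```

The padding value 0 is why index 0 is never assigned to a real word or tag, and why the Embedding layer later uses mask_zero=True to ignore those positions.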
First, the basic constants need to be set up:

# Length of the vocabulary
vocab_size = len(text_vocab) + 1

# The embedding dimension
embedding_dim = 64

# Number of RNN units
rnn_units = 100

# Batch size
BATCH_SIZE = 90

# Number of NER classes
num_classes = len(ner_vocab) + 1

Next, a convenience function for instantiating models is defined:

from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, TimeDistributed, Dense

dropout = 0.2

def build_model_bilstm(vocab_size, embedding_dim, rnn_units,
                       batch_size, classes):
    model = tf.keras.Sequential([
        Embedding(vocab_size, embedding_dim, mask_zero=True,
                  batch_input_shape=[batch_size, None]),
        Bidirectional(LSTM(units=rnn_units, return_sequences=True,
                           dropout=dropout,
                           kernel_initializer=
                               tf.keras.initializers.he_normal())),
        TimeDistributed(Dense(rnn_units, activation='relu')),
        Dense(num_classes, activation="softmax")
    ])
    return model

We are going to train our own embeddings. The next chapter will talk about pre-trained embeddings and using them in models. After the embedding layer, there is a BiLSTM layer, followed by a TimeDistributed dense layer. This last layer is different from the sentiment analysis model, where there was only a single unit for the binary output. In this problem, an NER token needs to be predicted for each word in the input sequence, so the output has as many tokens as the input sequence. Consequently, output tokens correspond one-to-one with input tokens, and each is classified as one of the NER classes. The TimeDistributed layer provides this capability. The other thing to note in this model is the use of regularization. It is important that the model does not overfit the training data; since LSTMs have high model capacity, regularization is very important. Feel free to play with some of these hyperparameters to get a feel for how the model reacts.
Now the model can be compiled:

model = build_model_bilstm(
    vocab_size=vocab_size,
    embedding_dim=embedding_dim,
    rnn_units=rnn_units,
    batch_size=BATCH_SIZE,
    classes=num_classes)
model.summary()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_9 (Embedding)      (90, None, 64)            2523072
_________________________________________________________________
bidirectional_9 (Bidirection (90, None, 200)           132000
_________________________________________________________________
time_distributed_6 (TimeDist (None, None, 100)         20100
_________________________________________________________________
dense_16 (Dense)             (None, None, 19)          1919
=================================================================
Total params: 2,677,091
Trainable params: 2,677,091
Non-trainable params: 0
_________________________________________________________________

This simplistic model has over 2.6 million parameters! If you look closely, the bulk of the parameters come from the size of the vocabulary, which has 39,422 words. This increases the model's training time and the computational capacity required. One way to reduce this is to make the vocabulary smaller. The easiest way to do that would be to only consider words above a certain frequency of occurrence, or to remove words shorter than a certain number of characters. The vocabulary could also be reduced by converting all characters to lower case; however, in NER, case is a very important feature. This model is ready for training.
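The claim that the vocabulary drives the parameter count can be verified with quick arithmetic:

```python
# The embedding layer stores one embedding_dim-sized vector per
# vocabulary entry, so its parameter count is vocab_size * embedding_dim.
vocab_size = 39422 + 1   # unique words + 1, as computed earlier
embedding_dim = 64

embedding_params = vocab_size * embedding_dim
print(embedding_params)  # 2523072, matching the summary above

total_params = 2_677_091
print(embedding_params / total_params)  # roughly 0.94 of all parameters
```

So about 94% of the model's weights live in the embedding table, which is why shrinking the vocabulary is the most effective lever for reducing model size here.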
The last thing that is needed is to split the data into training and test sets:

# to enable TensorFlow to process sentences properly
X = x_pad

# create training and testing splits
total_sentences = 62010
test_size = round(total_sentences / BATCH_SIZE * 0.2)
X_train = X[BATCH_SIZE*test_size:]
Y_train = Y[BATCH_SIZE*test_size:]
X_test = X[0:BATCH_SIZE*test_size]
Y_test = Y[0:BATCH_SIZE*test_size]

Now, the model is ready for training:

model.fit(X_train, Y_train, batch_size=BATCH_SIZE, epochs=15)

Train on 49590 samples
Epoch 1/15
49590/49590 [==============================] - 20s 409us/sample - loss: 0.1736 - accuracy: 0.9113
...
Epoch 8/15
49590/49590 [==============================] - 15s 312us/sample - loss: 0.0153 - accuracy: 0.9884
...
Epoch 15/15
49590/49590 [==============================] - 15s 312us/sample - loss: 0.0065 - accuracy: 0.9950

Over 15 epochs of training, the model does quite well, with over 99% accuracy. Let's see how the model performs on the test set and whether the regularization helped:

model.evaluate(X_test, Y_test, batch_size=BATCH_SIZE)

12420/12420 [==============================] - 3s 211us/sample - loss: 0.0926 - accuracy: 0.9624

The model performs well on the test data set, with over 96% accuracy. The gap between the train and test accuracies is still there, implying that the model could use some additional regularization. You can play with the dropout variable, or add additional dropout layers between the embedding and BiLSTM layers and between the TimeDistributed layer and the final Dense layer. Here is an example of a sentence fragment tagged by this model:

        Faure  Gnassingbe  said  in  a  speech  carried  by  state  media  Friday
Actual  B-per  I-per       O     O   O  O       O        O   O      O      B-tim
Model   B-per  I-per       O     O   O  O       O        O   O      O      B-tim

This model is not doing poorly at all. It was able to identify both the person and the time entity in the sentence.
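The split above is sized in whole batches, which matters because the model was built with a fixed batch_input_shape. The arithmetic can be checked by hand:

```python
total_sentences = 62010
BATCH_SIZE = 90

# hold out ~20% of the data, rounded to a whole number of batches
test_size = round(total_sentences / BATCH_SIZE * 0.2)
n_test = BATCH_SIZE * test_size
n_train = total_sentences - n_test

print(test_size, n_test, n_train)  # 138 12420 49590

# both splits are exact multiples of the batch size
assert n_train % BATCH_SIZE == 0 and n_test % BATCH_SIZE == 0
```

The resulting 49,590 training and 12,420 test examples match the counts reported by fit() and evaluate() above.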
As good as this model is, it does not use an important characteristic of named entity tags – a given tag is highly correlated with the tag coming after it. CRFs can take advantage of this information and further improve the accuracy of NER tasks. Let's understand how CRFs work, and then add them to the network above.

Conditional random fields (CRFs)

BiLSTM models look at a sequence of input words and predict the label for the current word. In making this determination, only the information from the inputs is considered; previous predictions play no role in the decision. As a result, information encoded in the sequence of labels is being discounted. To illustrate this point, consider a subset of NER tags: O, B-Per, I-Per, B-Geo, and I-Geo. This represents the two domains of person and geographical entities, plus an Other category for everything else. Based on the structure of IOB tags, we know that any I- tag must be preceded by a B- or I- tag of the same domain. This also implies that an I- tag cannot be preceded by an O tag. The following diagram shows the possible state transitions between these tags:

Figure 3.2: Possible NER tag transitions

Figure 3.2 color codes similar types of transitions with the same color. An O tag can transition back to itself or to a B tag. A B tag can go to its corresponding I tag or back to the O tag. An I tag can transition back to itself, to an O tag, or to a B tag of a different domain (not represented in the diagram, for simplicity). For a set of N tags, these transitions can be represented by a matrix of dimension N x N, where the entry at row i, column j denotes the possibility of tag j coming after tag i. Note that these transition weights can be learned from the data. Such a learned transition-weight matrix can be used during prediction to consider the entire sequence of predicted labels and update the probabilities.
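The IOB constraints described above can be written down as a hard validity check. The sketch below is illustrative (a CRF learns a soft, weighted version of exactly this structure rather than hard rules):

```python
def is_valid_transition(prev_tag, curr_tag):
    """Hard IOB constraint: an I- tag must continue an entity of the
    same type; O and B- tags may follow anything."""
    if curr_tag.startswith("I-"):
        entity = curr_tag[2:]
        return prev_tag in ("B-" + entity, "I-" + entity)
    return True

print(is_valid_transition("B-geo", "I-geo"))  # True
print(is_valid_transition("O", "I-per"))      # False
print(is_valid_transition("I-org", "B-geo"))  # True
```

A decoder that respects such constraints can never emit an impossible sequence like O I-per, which is one reason IOB-aware decoding improves over picking the argmax tag at each position independently.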
Here is an illustrative matrix with indicative transition weights:

From > To    O       B-Geo   I-Geo   B-Org   I-Org
O            3.28    2.20    0.00    3.66    0.00
B-Geo        -0.25   -0.10   4.06    0.00    0.00
I-Geo        -0.17   -0.61   3.51    0.00    0.00
B-Org        -0.10   -0.23   0.00    -1.02   4.81
I-Org        -0.33   -1.75   0.00    -1.38   5.10

As per the table above, the edge connecting I-Org to B-Org has a weight of -1.38, implying that this transition is very unlikely to happen.

Practically, implementing a CRF has three main steps. The first step is modifying the score generated by the BiLSTM layer to account for the transition weights, as shown above. Suppose the BiLSTM layer, operating on an input sequence X of n words, generates a sequence of predictions y = (y_1, y_2, ..., y_n) in the space of k unique tags. Let P be a matrix of dimensions n x k, where the element P_{i,j} represents the probability of the jth tag for the output at position i. Let A be a square matrix of transition probabilities as shown above, with a dimension of (k + 2) x (k + 2), where two additional tokens are added for the start- and end-of-sentence markers. Element A_{i,j} represents the transition probability from tag i to tag j. Using these values, a new score can be calculated like so:

$$ s(X, y) = \sum_{i=0}^{n} A_{y_i, y_{i+1}} + \sum_{i=1}^{n} P_{i, y_i} $$

A softmax can be calculated over all possible tag sequences to get the probability of a given sequence y:

$$ p(y|X) = \frac{e^{s(X, y)}}{\sum_{\tilde{y} \in Y_X} e^{s(X, \tilde{y})}} $$

Y_X represents all possible tag sequences, including those that may not conform to the IOB tag format. To train with this softmax, the log-likelihood of the correct tag sequence is maximized. Through clever use of dynamic programming, a combinatorial explosion can be avoided, and the denominator can be computed quite efficiently. Only simplified math is shown here to help build an intuition of how the method works.
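To make the scoring formula concrete, here is a short plain-Python sketch (with made-up emission and transition numbers, and the start/end markers omitted for brevity) that computes s(X, y) for one candidate tag sequence:

```python
# Emission scores P: one row per word, one column per tag (3 words, 3 tags).
# These numbers are invented for illustration only.
P = [[0.2, 0.6, 0.1],
     [0.1, 0.2, 0.7],
     [0.5, 0.3, 0.2]]

# Transition scores A between the 3 tags
# (start- and end-of-sentence rows are omitted for brevity).
A = [[0.5, 0.6, 0.01],
     [0.4, 0.01, 0.7],
     [0.4, 0.01, 0.5]]

def sequence_score(P, A, y):
    """Emission score at each position plus transition scores between
    consecutive tags in the candidate sequence y."""
    emission = sum(P[i][tag] for i, tag in enumerate(y))
    transition = sum(A[y[i]][y[i + 1]] for i in range(len(y) - 1))
    return emission + transition

y = [1, 2, 2]  # e.g. B-geo, I-geo, I-geo
print(round(sequence_score(P, A, y), 2))  # → 2.7
```

A CRF computes this score for every candidate sequence (efficiently, via dynamic programming) rather than enumerating them as a loop over sequences would.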
The actual computations will become clear in the custom layer implementation below. During decoding, the output sequence is the one that has the maximum score among all possible sequences, calculated conceptually using an argmax-style function. The Viterbi algorithm is commonly used to implement a dynamic programming solution for decoding. First, let us code the model and its training before getting into decoding.

NER with BiLSTM and CRFs

Implementing a BiLSTM network with CRFs requires adding a CRF layer on top of the BiLSTM network developed above. However, a CRF is not part of the core TensorFlow or Keras layers. It is available through the tensorflow_addons (tfa) package. The first step is to install this package:

!pip install tensorflow_addons==0.11.2

There are many sub-packages, but the convenience functions for the CRF are in the tfa.text subpackage:

Figure 3.3: tfa.text methods

While low-level methods for implementing the CRF layer are provided, a high-level layer-like construct is not. The implementation of a CRF therefore requires a custom layer, a custom loss function, and a custom training loop. After training, we will look at how to implement a customized inference function that uses Viterbi decoding.

Implementing the custom CRF layer, loss, and model

Similar to the flow above, there will be an embedding layer and a BiLSTM layer. The output of the BiLSTM needs to be evaluated with the CRF log-likelihood loss described above. This is the loss used to train the model.

The first step in the implementation is creating a custom layer. Implementing a custom layer in Keras requires subclassing keras.layers.Layer. The main method to be implemented is call(), which takes the inputs to the layer, transforms them, and returns the result. Additionally, the constructor of the layer can set up any parameters that are needed.
Let's start with the constructor:

from tensorflow.keras.layers import Layer
from tensorflow.keras import backend as K

class CRFLayer(Layer):
    """
    Computes the log likelihood during training
    Performs Viterbi decoding during prediction
    """
    def __init__(self, label_size, mask_id=0,
                 trans_params=None, name='crf', **kwargs):
        super(CRFLayer, self).__init__(name=name, **kwargs)
        self.label_size = label_size
        self.mask_id = mask_id
        self.transition_params = None

        if trans_params is None:  # not reloading pretrained params
            self.transition_params = tf.Variable(
                tf.random.uniform(shape=(label_size, label_size)),
                trainable=False)
        else:
            self.transition_params = trans_params

The main parameters that are needed are:

• The number of labels and the transition matrix: As described in the section above, a transition matrix needs to be learned. The dimension of that square matrix is the number of labels, which is what this parameter is used for when initializing the matrix. The transition parameter matrix is not trainable through gradient descent; it is calculated as a consequence of computing the log-likelihoods. The transition parameter matrix can also be passed into this layer if it has been learned in the past.

• The mask ID: Since the sequences are padded, it is important to recover the original sequence lengths for computing the transition scores. By convention, a value of 0 is used for the mask, and that is the default. This parameter is set up for future configurability.

The second method computes the result of applying this layer. Note that, as a layer, the CRF layer merely passes the outputs through during training time. The CRF layer does real work only during inference. At inference time, it uses the transition matrix and the decoding logic to correct the sequences output by the BiLSTM layers before returning them.
For now, this method is quite simple:

    def call(self, inputs, seq_lengths, training=None):
        if training is None:
            training = K.learning_phase()

        # during training, this layer just returns the logits
        if training:
            return inputs
        return inputs  # to be replaced later

This method takes the inputs as well as a parameter that specifies whether the method is being called during training or during inference. If this variable is not passed, it is pulled from the Keras backend. When models are trained with the fit() method, learning_phase() returns True. When the .predict() method is called on a model, this flag is set to False. As the sequences being passed in are masked, this layer needs to know the real sequence lengths during inference time for decoding. A variable is passed for this but is unused at this point.

Now that the basic CRF layer is ready, let's build the model.

A custom CRF model

Since the model builds on a number of preexisting layers in addition to the custom CRF layer above, explicit imports help the readability of the code:

from tensorflow.keras import Model, Input, Sequential
from tensorflow.keras.layers import LSTM, Embedding, Dense, TimeDistributed
from tensorflow.keras.layers import Dropout, Bidirectional
from tensorflow.keras import backend as K

The first step is to define a constructor that will create the various layers and store the appropriate dimensions:

class NerModel(tf.keras.Model):
    def __init__(self, hidden_num, vocab_size, label_size,
                 embedding_size, name='BilstmCrfModel', **kwargs):
        super(NerModel, self).__init__(name=name, **kwargs)
        self.num_hidden = hidden_num
        self.vocab_size = vocab_size
        self.label_size = label_size

        self.embedding = Embedding(vocab_size, embedding_size,
                                   mask_zero=True, name="embedding")
        self.biLSTM = Bidirectional(LSTM(hidden_num,
                                         return_sequences=True),
                                    name="bilstm")
        self.dense = TimeDistributed(tf.keras.layers.Dense(
            label_size), name="dense")
        self.crf = CRFLayer(self.label_size, name="crf")

This constructor takes in the
number of hidden units for the BiLSTM layer, the size of the vocabulary, the number of NER labels, and the size of the embeddings. Additionally, a default name is set by the constructor, which can be overridden at the time of instantiation. Any additional parameters supplied are passed along as keyword arguments.

During training and prediction, the following method will be called:

    def call(self, text, labels=None, training=None):
        seq_lengths = tf.math.reduce_sum(
            tf.cast(tf.math.not_equal(text, 0), dtype=tf.int32),
            axis=-1)

        if training is None:
            training = K.learning_phase()

        inputs = self.embedding(text)
        bilstm = self.biLSTM(inputs)
        logits = self.dense(bilstm)
        outputs = self.crf(logits, seq_lengths, training)
        return outputs

So, in a few lines of code, we have implemented a custom model using the custom CRF layer developed above. The only thing we need now to train this model is a loss function.

A custom loss function for NER using a CRF

Let's implement the loss function as part of the CRF layer, encapsulated in a function of the same name. Note that when this function is called, it is usually passed the labels and the predicted values. We will model our loss function on the custom loss functions in TensorFlow. Add this code to the CRF layer class:

    def loss(self, y_true, y_pred):
        y_pred = tf.convert_to_tensor(y_pred)
        y_true = tf.cast(self.get_proper_labels(y_true),
                         y_pred.dtype)
        seq_lengths = self.get_seq_lengths(y_true)
        log_likelihoods, self.transition_params = \
            tfa.text.crf_log_likelihood(y_pred, y_true, seq_lengths)

        # save transition params
        self.transition_params = tf.Variable(self.transition_params,
                                             trainable=False)
        # calc loss
        loss = -tf.reduce_mean(log_likelihoods)
        return loss

This function takes the true labels and the predicted labels. Both of these tensors are usually of the shape (batch size, max sequence length, number of NER labels).
However, the log-likelihood function in the tfa package expects the labels to be in a (batch size, max sequence length)-shaped tensor. So a convenience function, implemented as part of the CRF layer and shown below, is used to convert the label shapes:

    def get_proper_labels(self, y_true):
        shape = y_true.shape
        if len(shape) > 2:
            return tf.argmax(y_true, -1, output_type=tf.int32)
        return y_true

The log-likelihood function also requires the actual sequence lengths for each example. These sequence lengths can be computed from the labels and the mask identifier that was set up in the constructor of this layer (see above). This process is encapsulated in another convenience function, also part of the CRF layer:

    def get_seq_lengths(self, matrix):
        # matrix is of shape (batch_size, max_seq_len)
        mask = tf.not_equal(matrix, self.mask_id)
        seq_lengths = tf.math.reduce_sum(
            tf.cast(mask, dtype=tf.int32), axis=-1)
        return seq_lengths

First, a Boolean mask is generated from the labels by comparing the value of each label to the mask ID. Then, by casting the Booleans to integers and summing across each row, the length of the sequence is recovered. Now, the tfa.text.crf_log_likelihood() function is called to calculate and return the log-likelihoods and the transition matrix. The CRF layer's transition matrix is updated with the transition matrix returned from the function call. Finally, the loss is computed as the negative mean of the log-likelihoods returned.

At this point, our custom model is coded and ready to start training. We will need to set up the data and create a custom training loop.
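The comparison-and-sum trick used in get_seq_lengths() can be illustrated standalone, without TensorFlow, in a few lines of plain Python (a toy example with made-up label IDs and a mask ID of 0):

```python
# Two padded label sequences of max length 6; 0 is the mask/padding ID.
labels = [[3, 1, 1, 2, 0, 0],
          [5, 4, 0, 0, 0, 0]]

# Mark real (non-padding) positions and count them per row --
# the same logic as tf.not_equal() followed by reduce_sum() above.
seq_lengths = [sum(1 for label in row if label != 0) for row in labels]
print(seq_lengths)  # → [4, 2]
```

The TensorFlow version performs exactly this computation, vectorized over the whole batch.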
Implementing custom training

The model needs to be instantiated and initialized for training:

# Length of the vocabulary
vocab_size = len(text_vocab) + 1

# The embedding dimension
embedding_dim = 64

# Number of RNN units
rnn_units = 100

# Batch size
BATCH_SIZE = 90

# Number of NER classes
num_classes = len(ner_vocab) + 1

blc_model = NerModel(rnn_units, vocab_size, num_classes,
                     embedding_dim, dynamic=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

As in past examples, an Adam optimizer will be used. Next, we will construct a tf.data.Dataset from the DataFrames loaded in the BiLSTM section above:

# create training and testing splits
total_sentences = 62010
test_size = round(total_sentences / BATCH_SIZE * 0.2)
X_train = x_pad[BATCH_SIZE*test_size:]
Y_train = Y[BATCH_SIZE*test_size:]

X_test = x_pad[0:BATCH_SIZE*test_size]
Y_test = Y[0:BATCH_SIZE*test_size]

Y_train_int = tf.cast(Y_train, dtype=tf.int32)
train_dataset = tf.data.Dataset.from_tensor_slices((X_train,
                                                    Y_train_int))
train_dataset = train_dataset.batch(BATCH_SIZE, drop_remainder=True)

Roughly 20% of the data is reserved for testing; the rest is used for training. To implement a custom training loop, TensorFlow 2.0 exposes a gradient tape. This allows low-level management of the main steps required for training any model with gradient descent. These steps are:

1. Computing the forward-pass predictions
2. Computing the loss by comparing these predictions with the labels
3. Computing the gradients of the trainable parameters based on the loss, and then using the optimizer to adjust the weights

Let us train this model for 5 epochs and watch the loss as training progresses. Compare this to the 15 epochs of training for the previous model. The custom training loop is shown below:

loss_metric = tf.keras.metrics.Mean()

epochs = 5

# Iterate over epochs.
for epoch in range(epochs):
    print('Start of epoch %d' % (epoch,))

    # Iterate over the batches of the dataset.
    for step, (text_batch, labels_batch) in enumerate(
            train_dataset):
        labels_max = tf.argmax(labels_batch, -1,
                               output_type=tf.int32)
        with tf.GradientTape() as tape:
            logits = blc_model(text_batch, training=True)
            loss = blc_model.crf.loss(labels_max, logits)
        grads = tape.gradient(loss, blc_model.trainable_weights)
        optimizer.apply_gradients(zip(grads,
                                      blc_model.trainable_weights))

        loss_metric(loss)
        if step % 50 == 0:
            print('step %s: mean loss = %s' %
                  (step, loss_metric.result()))

A metric is created to keep track of the average loss over time. For 5 epochs, inputs and labels are pulled from the training data set, one batch at a time. Using tf.GradientTape() to keep track of the operations, the steps outlined in the bullets above are implemented. Note that we pass the trainable variables manually, as this is a custom training loop. Finally, the loss metric is printed every 50th step to show the training progress. This yields the results below, which have been abbreviated:

Start of epoch 0
step 0: mean loss = tf.Tensor(71.14853, shape=(), dtype=float32)
step 50: mean loss = tf.Tensor(31.064453, shape=(), dtype=float32)
...
Start of epoch 4
step 0: mean loss = tf.Tensor(4.4125915, shape=(), dtype=float32)
step 550: mean loss = tf.Tensor(3.8311224, shape=(), dtype=float32)

Given that we implemented a custom training loop without compiling the model, we could not obtain a summary of the model parameters before training.
To get an idea of the size of the model, a summary can be obtained now:

blc_model.summary()

Model: "BilstmCrfModel"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        multiple                  2523072
_________________________________________________________________
bilstm (Bidirectional)       multiple                  132000
_________________________________________________________________
dense (TimeDistributed)      multiple                  3819
_________________________________________________________________
crf (CRFLayer)               multiple                  361
=================================================================
Total params: 2,659,252
Trainable params: 2,658,891
Non-trainable params: 361
_________________________________________________________________

It is comparable in size to the previous model but has some non-trainable parameters. These come from the transition matrix, which is not learned through gradient descent and is therefore classified as non-trainable.

However, the training loss is hard to interpret. To compute accuracy, we need to implement decoding, which is the focus of the next section. For the moment, let's assume that decoding is available and examine the results of training for 5 epochs. For illustration purposes, here is a sentence from the test set, with results pulled at the end of the first epoch and at the end of five epochs. The example sentence is:

Writing in The Washington Post newspaper , Mr. Ushakov also said it is inadmissible to move in the direction of demonizing Russia .

The corresponding true label is:

O O B-org I-org I-org O O B-per B-org O O O O O O O O O O O O B-geo O

This is a difficult example for NER, with The Washington Post as a three-word organization: the first word is very common and used in multiple contexts, and the second word is also the name of a geographical location.
Also note the imperfect labels in the GMB data set, where the second tag of the name Ushakov is tagged as an organization. At the end of the first epoch of training, the model predicts:

O O O B-geo I-org O O B-per I-per O O O O O O O O O O O O B-geo O

It gets confused by the organization not being where it expects it to be. It also shows that it hasn't yet learned the transition probabilities, as it puts an I-org tag after a B-geo tag. However, it does not make a mistake in the person portion. Unfortunately for the model, it will not get credit for this great prediction of the person tag; due to the imperfect labels, it will still count as a miss. The result after five epochs of training is better than the first-epoch result:

O O B-org I-org I-org O O B-per I-per O O O O O O O O O O O O B-geo O

This is a great result, given the limited amount of training we have done. Now, let's see how we can decode the sentence in the CRF layer to get these sequences. The algorithm used for decoding is called the Viterbi decoder.

Viterbi decoding

A straightforward way to predict the sequence of labels is to output the label that has the highest activation from the previous layers of the network. However, this can be sub-optimal, as it assumes that each label prediction is independent of the previous or successive predictions. The Viterbi algorithm takes the predictions for each word in the sequence and applies a maximization algorithm so that the output sequence has the highest overall likelihood. In future chapters, we will see another way of accomplishing the same objective through beam search. Viterbi decoding involves maximizing over the entire sequence, as opposed to optimizing at each word of the sequence. To illustrate this algorithm and way of thinking, let's take the example of a sentence of 5 words and a set of 3 labels. These labels could be O, B-geo, and I-geo, as an example.
This algorithm needs the transition matrix values between labels. Recall that this was generated and stored in the custom CRF layer above. Let's say that the matrix looks like so:

From > To   Mask    O       B-geo   I-geo
Mask        0.6     0.3     0.2     0.01
O           0.8     0.5     0.6     0.01
B-geo       0.2     0.4     0.01    0.7
I-geo       0.3     0.4     0.01    0.5

To explain how the algorithm works, the figure shown below will be used:

Figure 3.4: Steps in the Viterbi decoder

The sentence starts from the left. Arrows from the start marker to the first token represent the probability of the transition between the two tokens; the numbers on the arrows match the values in the transition matrix above. Within the circles denoting labels, the scores generated by the neural network (the BiLSTM model, in our case) are shown for the first word. These scores are added to the transition scores to give the final score for each label. Note that we have switched terminology from probabilities to scores, as no normalization is being performed in this particular example.

The scores for the first word's labels are:

Score of O: 0.3 (transition score) + 0.2 (activation score) = 0.5
Score of B-geo: 0.2 (transition score) + 0.3 (activation score) = 0.5
Score of I-geo: 0.01 (transition score) + 0.01 (activation score) = 0.02

At this point, it is equally likely that an O or a B-geo tag will be the starting tag. Let's consider the next word and calculate the scores for the following transitions using the same approach:

(O, O) = 0.5 + 0.3 = 0.8
(O, B-geo) = 0.6 + 0.3 = 0.9
(O, I-geo) = 0.01 + 0.25 = 0.26
(B-geo, O) = 0.4 + 0.3 = 0.7
(B-geo, B-geo) = 0.01 + 0.3 = 0.31
(B-geo, I-geo) = 0.7 + 0.25 = 0.95

This process is called the forward pass. It should also be noted, even though this is a contrived example, that the activations at a given input may not be the best predictor of the right label for that word once the previous labels have been considered.
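The arithmetic above can be checked with a few lines of plain Python, using the same made-up transition and activation numbers (this is just a verification of the worked example, not the tfa implementation):

```python
# Transition scores (from-tag, to-tag), matching the illustrative
# matrix above; only the entries used in the example are listed.
trans = {
    ("Mask", "O"): 0.3, ("Mask", "B-geo"): 0.2, ("Mask", "I-geo"): 0.01,
    ("O", "O"): 0.5, ("O", "B-geo"): 0.6, ("O", "I-geo"): 0.01,
    ("B-geo", "O"): 0.4, ("B-geo", "B-geo"): 0.01, ("B-geo", "I-geo"): 0.7,
}

# Activation (emission) scores from the network for the first two words,
# with the same invented numbers as in the example.
act = [
    {"O": 0.2, "B-geo": 0.3, "I-geo": 0.01},   # word 1
    {"O": 0.3, "B-geo": 0.3, "I-geo": 0.25},   # word 2
]

# Scores for the first word: transition from the start (Mask) plus activation.
word1 = {tag: trans[("Mask", tag)] + act[0][tag] for tag in act[0]}
print(round(word1["O"], 2), round(word1["B-geo"], 2))  # → 0.5 0.5

# Pair score for the second word, e.g. the (B-geo, I-geo) transition.
pair = trans[("B-geo", "I-geo")] + act[1]["I-geo"]
print(round(pair, 2))  # → 0.95
```

The full Viterbi algorithm simply repeats this add-and-compare step across the whole sentence while remembering, at each tag, which predecessor gave the best running score.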
If the sentence were only two words long, then the scores for the various sequences could be calculated by summing the scores at each step:

(Start, O, O) = 0.5 + 0.8 = 1.3
(Start, O, B-geo) = 0.5 + 0.9 = 1.4
(Start, O, I-geo) = 0.5 + 0.26 = 0.76
(Start, B-geo, O) = 0.5 + 0.7 = 1.2
(Start, B-geo, B-geo) = 0.5 + 0.31 = 0.81
(Start, B-geo, I-geo) = 0.5 + 0.95 = 1.45

If only the activation scores were considered, the most probable sequences would be either (Start, B-geo, O) or (Start, B-geo, B-geo). However, using the transition scores along with the activations means that the sequence with the highest score is (Start, B-geo, I-geo) in this example. While the forward pass gives the highest score of the entire sequence ending at the last token, the backward pass reconstructs the sequence that resulted in this highest score. This is essentially the Viterbi algorithm, which uses dynamic programming to perform these steps in an efficient manner.

Implementing this algorithm is aided by the fact that the core computation is provided as a method in the tfa package. This decoding step will be implemented in the call() method of the CRF layer implemented above. Modify this method to look like so:

    def call(self, inputs, seq_lengths, training=None):
        if training is None:
            training = K.learning_phase()

        # during training, this layer just returns the logits
        if training:
            return inputs

        # viterbi decode logic to return proper
        # results at inference
        _, max_seq_len, _ = inputs.shape
        seqlens = seq_lengths
        paths = []
        for logit, text_len in zip(inputs, seqlens):
            viterbi_path, _ = tfa.text.viterbi_decode(
                logit[:text_len], self.transition_params)
            paths.append(self.pad_viterbi(viterbi_path, max_seq_len))
        return tf.convert_to_tensor(paths)

The lines after the early return for training are the new additions.
The viterbi_decode() method takes the activations from the previous layers, truncated to each sequence's actual length, along with the transition matrix, and computes the path with the highest score. That score is also returned, but we ignore it for our purposes of inference. This process needs to be performed for each sequence in the batch. Note that this returns sequences of different lengths, which makes them harder to convert into tensors, so a utility function is used to pad the returned sequences:

    def pad_viterbi(self, viterbi, max_seq_len):
        if len(viterbi) < max_seq_len:
            viterbi = viterbi + [self.mask_id] * \
                (max_seq_len - len(viterbi))
        return viterbi

Note that this layer behaves in the opposite way to a dropout layer. A dropout layer modifies its inputs only during training and merely passes them through during inference. Our CRF layer passes the inputs through during training but transforms them using the Viterbi decoder during inference. In both cases, the training parameter controls the behavior.

Now that the layer is modified and ready, the model needs to be re-instantiated and trained. Post-training, inference can be performed like so:

Y_test_int = tf.cast(Y_test, dtype=tf.int32)
test_dataset = tf.data.Dataset.from_tensor_slices((X_test,
                                                   Y_test_int))
test_dataset = test_dataset.batch(BATCH_SIZE, drop_remainder=True)

out = blc_model.predict(test_dataset.take(1))

This will run inference on a small batch of testing data. Let's check the result for the example sentence:

text_tok.sequences_to_texts([X_test[2]])

['Writing in The Washington Post newspaper , Mr. Ushakov also said it is inadmissible to move in the direction of demonizing Russia . ']

As we can see in the output, the predictions are better than the actual labels in the data!
print("Ground Truth: ", ner_tok.sequences_to_texts(
    [tf.argmax(Y_test[2], -1).numpy()]))
print("Prediction: ", ner_tok.sequences_to_texts([out[2]]))

Ground Truth: ['O O B-org I-org I-org O O B-per B-org O O O O O O O O O O O O B-geo O ']
Prediction: ['O O B-org I-org I-org O O B-per I-per O O O O O O O O O O O O B-geo O ']

To get a sense of the accuracy of the trained model, a custom method needs to be implemented. This is shown below:

def np_precision(pred, true):
    # expects numpy arrays of shape (batch_size, max_seq_len)
    assert pred.shape == true.shape
    assert len(pred.shape) == 2
    mask_pred = np.ma.masked_equal(pred, 0)
    mask_true = np.ma.masked_equal(true, 0)
    acc = np.equal(mask_pred, mask_true)
    return np.mean(acc.compressed().astype(int))

Using numpy's MaskedArray feature, the padding tokens are masked out, the predictions and labels are compared and converted to an integer array, and the mean is calculated to compute the accuracy:

np_precision(out, tf.argmax(Y_test[:BATCH_SIZE], -1).numpy())

0.9664461247637051

This is a pretty accurate model after just 5 epochs of training, with a very simple architecture, and while using embeddings that are trained from scratch. A recall metric could be implemented in a similar fashion. The BiLSTM-only model shown earlier took 15 epochs of training to reach a similar accuracy!

This completes the implementation of an NER model using BiLSTMs and CRFs. If this topic is interesting and you would like to continue working on it, look for the CoNLL 2003 data set for NER. Even today, papers are being published that aim to improve the accuracy of models on that data set.

Summary

We have covered quite a lot of ground in this chapter. NER and its importance in industry were explained. To build NER models, BiLSTMs and CRFs are needed. Using BiLSTMs, which we learned about in the previous chapter while building a sentiment classification model, we built a first version of a model that can label named entities.
This model was further improved using CRFs. In the process of building these models, we covered the use of the TensorFlow Dataset API. We also built advanced models for the CRF case by building a custom Keras layer, a custom model, a custom loss function, and a custom training loop.

Thus far, we have trained embeddings for tokens within the models themselves. A considerable amount of lift can be achieved by using pre-trained embeddings. In the next chapter, we'll focus on the concept of transfer learning and the use of pre-trained embeddings like BERT.

4
Transfer Learning with BERT

Deep learning models really shine with large amounts of training data. Having enough labeled data is a constant challenge in the field, especially in NLP. A successful approach that has yielded great results in the last couple of years is transfer learning: a model is trained in an unsupervised or semi-supervised way on a large corpus and then fine-tuned for a specific application. Such models have shown excellent results. In this chapter, we will build on the IMDb movie review sentiment analysis task and use transfer learning to build models using GloVe (Global Vectors for Word Representation) pre-trained embeddings and BERT (Bidirectional Encoder Representations from Transformers) contextual models. In this chapter, we will cover the following topics:

• Overview of transfer learning and its use in NLP
• Loading pre-trained GloVe embeddings in a model
• Building a sentiment analysis model using pre-trained GloVe embeddings and fine-tuning
• Overview of contextual embeddings using attention – BERT
• Loading pre-trained BERT models using the Hugging Face library
• Using pre-trained and custom BERT-based fine-tuned models for sentiment analysis

Transfer learning is a core concept that has made rapid advances in NLP possible. We will discuss transfer learning first.
Transfer learning overview

Traditionally, a machine learning model is trained for performance on a specific task. It is only expected to work for that task and is not likely to perform well beyond it. Take the example of classifying the sentiment of IMDb movie reviews from Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs. The model trained for that particular task was optimized for performance on that task alone. A separate set of labeled data, specific to a different task, is required if we wish to train another model, and building that model might not be effective if there isn't enough labeled data for the task.

Transfer learning is the concept of learning a fundamental representation of the data that can be adapted to different tasks. With transfer learning, a more abundantly available dataset can be used to distill knowledge into a new ML model for a specific task. Through the use of this knowledge, the new ML model can achieve decent performance even when there is not enough labeled data available for a traditional ML approach to return good results. For this scheme to be effective, there are a few important considerations:

• The knowledge distillation step, called pre-training, should have an abundant amount of data available relatively cheaply
• Adaptation, often called fine-tuning, should be done with data that shares similarities with the data used for pre-training

The figure below illustrates this concept:

Figure 4.1: Comparing traditional machine learning with transfer learning

This technique has been very effective in computer vision. ImageNet is often used as the dataset for pre-training. Specific models are then fine-tuned for a variety of tasks, such as image classification, object detection, image segmentation, and pose detection, among others.
Types of transfer learning

The concepts of domains and tasks underpin transfer learning. A domain represents a specific area of knowledge or data: news articles, social media posts, medical records, Wikipedia entries, and court judgments could be considered examples of different domains. A task is a specific objective or action within a domain. Sentiment analysis and stance detection of tweets are specific tasks in the social media posts domain; detection of cancer and of fractures could be different tasks in the domain of medical records. Different types of transfer learning have different combinations of source and target domains and tasks. Three main types of transfer learning, namely domain adaptation, multi-task learning, and sequential learning, are described below.

Domain adaptation

In this setting, the domains of the source and target tasks are usually the same; the differences relate to the distribution of training and testing data. This case of transfer learning concerns a fundamental assumption of any machine learning task: the assumption that training and testing data are i.i.d. The first part, "i.", stands for independent, which implies that each sample is independent of the others. In practice, this assumption can be violated when there are feedback loops, as in recommendation systems. The second part, "i.d.", stands for identically distributed and implies that the distribution of labels and other characteristics between training and test samples is the same.

Suppose the domain is animal photos, and the task is identifying cats in the photos. This task can be modeled as a binary classification problem. The identically distributed assumption implies that the proportion of cats in the photos is similar between training and test samples. It also implies that characteristics of the photos, such as resolution, lighting conditions, and orientation, are very similar. In practice, this assumption is also frequently violated.
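A crude way to check the identically distributed part of the assumption is to compare label frequencies between the splits. The following toy sketch (with made-up labels, purely for illustration) shows such a check:

```python
from collections import Counter

# Hypothetical labels for a cat-vs-dog classification split.
train_labels = ["cat", "cat", "dog", "dog", "dog", "cat"]
test_labels = ["dog", "dog", "dog", "dog", "cat", "dog"]

def label_distribution(labels):
    """Return the fraction of samples carrying each label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: counts[label] / total for label in counts}

print(label_distribution(train_labels))  # cats and dogs balanced 50/50
print(label_distribution(test_labels))   # dogs dominate: distribution shift
```

A large gap between the two distributions, as in this toy example, is a warning that the i.d. assumption is violated and domain adaptation techniques may be needed.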
There is a story about a very early perceptron model built to identify tanks in the woods. The model performed quite well on the training set. When the test set was expanded, it was discovered that all the pictures of tanks in the woods had been taken on sunny days, whereas the pictures of woods without tanks had been taken on cloudy days. The network had learned to differentiate sunny from cloudy conditions more than the presence or absence of tanks. During testing, the pictures supplied came from a different distribution, but the same domain, which led to the model failing. Dealing with similar situations is called domain adaptation.

There are many techniques for domain adaptation, one of which is data augmentation. In computer vision, images in the training set can be cropped, warped, or rotated, and varying amounts of exposure, contrast, or saturation can be applied to them. These transformations increase the training data and can mitigate the gap between the training and potential testing data. Similar techniques are used in speech and audio by adding random noise, such as street sounds or background chatter, to an audio sample.

Domain adaptation techniques are well known in traditional machine learning, with several resources already available on the topic. However, what makes transfer learning exciting is that using data from a different source domain or task for pre-training can improve model performance on a different task or domain. There are two types of transfer learning in this area: the first is multi-task learning, and the second is sequential learning.

Multi-task learning

In multi-task learning, data from different but related tasks is passed through a set of common layers. Then, there may be task-specific layers on top that learn about a particular task objective.
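The shared-layers-plus-heads idea just described can be sketched with plain NumPy. This is an illustrative toy, not a trainable Keras model: the two heads (sentiment and topic) and all dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared layers: examples from every task pass through the same weights,
# so these weights are pushed to learn structure common to all tasks.
W_shared = rng.normal(size=(16, 8))     # 16-dim input -> 8-dim shared features

# Task-specific heads stacked on top of the shared representation.
W_sentiment = rng.normal(size=(8, 2))   # hypothetical 2-class sentiment head
W_topic = rng.normal(size=(8, 5))       # hypothetical 5-class topic head

def forward(x):
    shared = np.tanh(x @ W_shared)      # common representation for all tasks
    return shared @ W_sentiment, shared @ W_topic

x = rng.normal(size=(4, 16))            # a batch of 4 examples
sentiment_logits, topic_logits = forward(x)
print(sentiment_logits.shape, topic_logits.shape)
```

During training, each head's output would be fed to its own loss function, while gradients from both losses flow back into `W_shared`.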
Figure 4.2 shows the multi-task learning setting:

Figure 4.2: Multi-task transfer learning

The outputs of these task-specific layers are evaluated with different loss functions, and the training examples for all the tasks are passed through all the common layers of the model. A task-specific layer is not expected to do well on tasks other than its own. The expectation is that the common layers learn some of the underlying structure that is shared by the different tasks. This information about structure provides useful signals and improves the performance of all the models. The data for each task has many features, and these features may be used to construct representations that are useful in other related tasks.

Intuitively, people learn some elementary skills before mastering more complex skills. Learning to write requires first becoming skilled in holding a pen or pencil. Writing, drawing, and painting can be considered different tasks that share a common "layer" of holding a pen or pencil. The same concept applies while learning a new language, where the structure and grammar of one language may help with learning a related language. Learning Latin-based languages like French, Italian, and Spanish becomes easier if one of the other Latin languages is known, as these languages share word roots.

Multi-task learning increases the amount of data available for training by pooling data from different tasks together. Further, it forces the network to generalize better by trying to learn representations in the shared layers that are common across tasks. Multi-task learning is a crucial reason behind the recent success of models such as GPT-2 and BERT. It is the most common technique used for pre-training models that are then adapted to specific tasks.

Sequential learning

Sequential learning is the most common form of transfer learning. It is named so because it involves two simple steps executed in sequence.
The first step is pre-training and the second step is fine-tuning. These steps are shown in Figure 4.3:

Figure 4.3: Sequential learning

The first step is to pre-train a model. The most successful pre-trained models use some form of multi-task learning objective, as depicted on the left side of the figure. A portion of the model used for pre-training is then reused for different tasks, shown on the right of the figure. This reusable part of the pre-trained model depends on the specific architecture and may comprise a different set of layers; the reusable partition shown in Figure 4.3 is just illustrative.

In the second step, the pre-trained model is loaded and added as the starting layers of a task-specific model. The weights learned by the pre-trained model can be frozen during the training of the task-specific model, or those weights can be updated, that is, fine-tuned. When the weights are frozen, this pattern of using the pre-trained model is called feature extraction.

Generally, fine-tuning gives better performance than a feature extraction approach, though there are pros and cons to both. In fine-tuning, not all weights may get updated, as the task-specific training data is often much smaller. If the pre-trained model is a word embedding, then the embeddings of words that do not appear in the task-specific data can become stale relative to those that are updated. If the task has a small vocabulary or many out-of-vocabulary words, this can hurt the performance of the model. Generally, if the source and target tasks are similar, then fine-tuning produces better results.

An example of such a pre-trained model is Word2vec, which we saw in Chapter 1, Essentials of NLP. There is another model for generating word-level embeddings called GloVe, or Global Vectors for Word Representation, introduced in 2014 by researchers from Stanford. Let's take a practical tour of transfer learning by re-building the IMDb movie sentiment analysis using GloVe embeddings in the next section.
After that, we shall take a tour of BERT and apply it in the same sequential learning setting.

IMDb sentiment analysis with GloVe embeddings

In Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs, a BiLSTM model was built to predict the sentiment of IMDb movie reviews. That model learned the embeddings of the words from scratch and achieved an accuracy of 83.55% on the test set, while the SOTA result was closer to 97.4%. If pre-trained embeddings are used, we expect an increase in model accuracy. Let's try this out and see the impact of transfer learning on this model. But first, let's understand the GloVe embedding model.

GloVe embeddings

In Chapter 1, Essentials of NLP, we discussed the Word2Vec algorithm, which is based on skip-grams with negative sampling. Some words appear more frequently in text than others, so training data for some words is more plentiful than for others. Beyond this, Word2Vec does not use the statistics of co-occurrence in any way. GloVe takes these frequencies into account and posits that the co-occurrences provide vital information. The Global part of the name refers to the fact that the model considers these co-occurrences over the entire corpus. Rather than focusing on raw probabilities of co-occurrence, GloVe focuses on ratios of co-occurrence with probe words.

In the paper, the authors take the example of the words ice and steam to illustrate the concept. Let's say that solid is a probe word used to examine the relationship between ice and steam. The probability of occurrence of solid given steam is psolid|steam. Intuitively, we expect this probability to be small. Conversely, the probability of occurrence of solid with ice is represented by psolid|ice and is expected to be large. If the ratio of psolid|ice to psolid|steam is computed, we expect this value to be significant.
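This intuition can be made concrete with toy numbers. The co-occurrence counts below are invented purely for illustration and are not from any real corpus:

```python
# Toy co-occurrence counts (illustrative numbers only).
cooc = {
    ("ice", "solid"): 190, ("steam", "solid"): 2,
    ("ice", "gas"): 3, ("steam", "gas"): 180,
    ("ice", "water"): 300, ("steam", "water"): 280,
    ("ice", "fashion"): 1, ("steam", "fashion"): 1,
}
totals = {"ice": 1000, "steam": 1000}   # total co-occurrence count per word

def ratio(probe):
    """Ratio p(probe|ice) / p(probe|steam) from the toy counts."""
    p_ice = cooc[("ice", probe)] / totals["ice"]
    p_steam = cooc[("steam", probe)] / totals["steam"]
    return p_ice / p_steam

for probe in ["solid", "gas", "water", "fashion"]:
    print(probe, round(ratio(probe), 3))
# solid >> 1, gas << 1, water and fashion ~ 1
```

The ratio is large for probe words related only to ice, small for those related only to steam, and close to 1 when the probe word is equally (un)related to both.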
If the same ratio is computed with the probe word being gas, the opposite behavior would be expected. In cases where both probabilities are similar – either because the probe word is unrelated to both words, or equally likely to occur with both – the ratio should be close to 1. An example of a probe word close to both ice and steam is water; an example of a word unrelated to either is fashion. GloVe ensures that this relationship is factored into the embeddings generated for the words. It also has optimizations for rare co-occurrences, numerical stability issues in computation, and others.

Now let us see how to use these pre-trained embeddings for predicting sentiment. The first step is to load the data. The code here is identical to the code used in Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs; it's provided here for the sake of completeness. All the code for this exercise is in the file imdb-transferlearning.ipynb located in the chapter4-Xfer-learning-BERT directory in GitHub.

Loading IMDb training data

TensorFlow Datasets, or the tfds package, will be used to load the data:

import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
import pandas as pd

imdb_train, ds_info = tfds.load(name="imdb_reviews", split="train",
                                with_info=True, as_supervised=True)
imdb_test = tfds.load(name="imdb_reviews", split="test",
                      as_supervised=True)

Note that the additional 50,000 unlabeled reviews are ignored for the purpose of this exercise.
After the training and test sets are loaded as shown above, the content of the reviews needs to be tokenized and encoded:

# Use the default tokenizer settings
tokenizer = tfds.features.text.Tokenizer()
vocabulary_set = set()
MAX_TOKENS = 0

for example, label in imdb_train:
    some_tokens = tokenizer.tokenize(example.numpy())
    if MAX_TOKENS < len(some_tokens):
        MAX_TOKENS = len(some_tokens)
    vocabulary_set.update(some_tokens)

The code shown above tokenizes the review text and constructs a vocabulary. This vocabulary is used to construct a tokenizer:

imdb_encoder = tfds.features.text.TokenTextEncoder(vocabulary_set,
                                                   lowercase=True,
                                                   tokenizer=tokenizer)
vocab_size = imdb_encoder.vocab_size
print(vocab_size, MAX_TOKENS)

93931 2525

Note that the text was converted to lowercase before encoding. Converting to lowercase helps reduce the vocabulary size and may improve the lookup of corresponding GloVe vectors later on. However, capitalization may carry important information, which can help in tasks such as NER, which we covered in previous chapters. Also note that not all languages distinguish between capital and small letters. Hence, this particular transformation should be applied after due consideration.

Now that the tokenizer is ready, the data needs to be tokenized and the sequences padded to a maximum length. Since we are interested in comparing performance with the model trained in Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs, we use the same setting of sampling a maximum of 150 words from each review.
The following convenience methods help in performing this task:

# transformation functions to be used with the dataset
from tensorflow.keras.preprocessing import sequence

def encode_pad_transform(sample):
    encoded = imdb_encoder.encode(sample.numpy())
    pad = sequence.pad_sequences([encoded], padding='post',
                                 maxlen=150)
    return np.array(pad[0], dtype=np.int64)

def encode_tf_fn(sample, label):
    encoded = tf.py_function(encode_pad_transform,
                             inp=[sample],
                             Tout=(tf.int64))
    encoded.set_shape([None])
    label.set_shape([])
    return encoded, label

Finally, the data is encoded using the convenience functions above like so:

encoded_train = imdb_train.map(encode_tf_fn,
                               num_parallel_calls=tf.data.experimental.AUTOTUNE)
encoded_test = imdb_test.map(encode_tf_fn,
                             num_parallel_calls=tf.data.experimental.AUTOTUNE)

At this point, all the training and test data is ready for training. Note that by limiting the size of the reviews, only the first 150 tokens of a long review are counted. Typically, the first few sentences of a review set up the context or description, and the latter part contains the conclusion, so limiting to the first part of the review can lose valuable information. The reader is encouraged to try a different padding scheme, where tokens from the first part of the review are dropped instead of the last part, and observe the difference in accuracy.

The next step is the key step in transfer learning – loading the pre-trained GloVe embeddings and using them as the weights of the embedding layer.

Loading pre-trained GloVe embeddings

First, the pre-trained embeddings need to be downloaded and unzipped:

# Download the GloVe embeddings
!wget
!unzip glove.6B.zip

Archive: glove.6B.zip
  inflating: glove.6B.50d.txt
  inflating: glove.6B.100d.txt
  inflating: glove.6B.200d.txt
  inflating: glove.6B.300d.txt

Note that this is a huge download of over 800 MB, so this step may take some time to execute. Upon unzipping, there are four different files, as shown in the output above. Each file has a vocabulary of 400,000 words; the main difference between them is the dimension of the embeddings generated. The previous chapter used an embedding dimension of 64; the nearest GloVe dimension is 50, so let's use that. The file format is quite simple.
Each line of the file has multiple values separated by spaces. The first item of each row is the word, and the rest of the items are the values of the vector in each dimension. So, in the 50-dimensional file, each row has 51 columns. These vectors need to be loaded into memory:

dict_w2v = {}
with open('glove.6B.50d.txt', "r") as file:
    for line in file:
        tokens = line.split()
        word = tokens[0]
        vector = np.array(tokens[1:], dtype=np.float32)
        if vector.shape[0] == 50:
            dict_w2v[word] = vector
        else:
            print("There was an issue with " + word)

# let's check the vocabulary size
print("Dictionary Size: ", len(dict_w2v))

Dictionary Size:  400000

If the code processed the file correctly, you shouldn't see any errors, and you should see a dictionary size of 400,000 words. Once these vectors are loaded, an embedding matrix needs to be created.

Creating a pre-trained embedding matrix using GloVe

So far, we have a dataset, its vocabulary, and a dictionary of GloVe words with their corresponding vectors. However, there is no correlation between these two vocabularies. The way to connect them is through the creation of an embedding matrix. First, let's initialize an embedding matrix of zeros:

embedding_dim = 50
embedding_matrix = np.zeros((imdb_encoder.vocab_size, embedding_dim))

Note that this is a crucial step. When a pre-trained word list is used, finding a vector for every word in the training/test data is not guaranteed. Recall the discussion on transfer learning earlier, where the source and target domains can be different. One way this difference manifests itself is through a mismatch in tokens between the training data and the pre-trained model. As we go through the next steps, this will become more apparent. After this embedding matrix of zeros is initialized, it needs to be populated. For each word in the vocabulary of the reviews, the corresponding vector is retrieved from the GloVe dictionary.
The ID of the word is retrieved using the encoder, and the embedding matrix entry corresponding to that ID is set to the retrieved vector:

unk_cnt = 0
unk_set = set()
for word in imdb_encoder.tokens:
    embedding_vector = dict_w2v.get(word)
    if embedding_vector is not None:
        tkn_id = imdb_encoder.encode(word)[0]
        embedding_matrix[tkn_id] = embedding_vector
    else:
        unk_cnt += 1
        unk_set.add(word)

# Print how many weren't found
print("Total unknown words: ", unk_cnt)

Total unknown words:  14553

During the data loading step, we saw that the total number of tokens was 93,931. Out of these, 14,553 words could not be found, which is approximately 15% of the tokens. For these words, the embedding matrix will have zeros. This completes the first step in transfer learning. Now that the setup is done, we will use TensorFlow to make use of these pre-trained embeddings. Two different models will be tried – the first based on feature extraction and the second on fine-tuning.

Feature extraction model

As discussed earlier, the feature extraction model freezes the pre-trained weights and does not update them. An important issue with this approach in the current setup is that a large number of tokens, over 14,000, have zero embedding vectors because they could not be matched to an entry in the GloVe word list.

To minimize the chances of not finding matches between the pre-trained vocabulary and the task-specific vocabulary, ensure that similar tokenization schemes are used. GloVe uses a word-based tokenization scheme like the one provided by the Stanford tokenizer. As seen in Chapter 1, Essentials of NLP, this works better than the whitespace tokenizer used for the training data above; the 15% unmatched tokens are largely due to the different tokenizers. As an exercise, the reader can implement the Stanford tokenizer and see the reduction in unknown tokens.
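The effect of tokenizer mismatch can be illustrated with a toy example. The tiny "pre-trained" vocabulary and the contraction-splitting rule below are simplifications invented for this sketch; the actual Stanford tokenizer is considerably more sophisticated:

```python
# Toy stand-in for a pre-trained word list (lowercase, word-level,
# with punctuation and contractions split off, GloVe-style).
pretrained_vocab = {"the", "movie", "was", "n't", "great", "."}

text = "The movie wasn't great."

# Whitespace tokenization keeps punctuation and contractions glued to
# words, so "wasn't" and "great." find no match in the word list.
ws_tokens = text.lower().split()

# A smarter tokenizer splits contractions and punctuation (simplified
# here with string replacements purely for illustration).
norm = text.lower().replace("n't", " n't").replace(".", " .")
better_tokens = norm.split()

ws_oov = [t for t in ws_tokens if t not in pretrained_vocab]
better_oov = [t for t in better_tokens if t not in pretrained_vocab]
print(ws_oov, better_oov)
```

With the naive whitespace split, half the tokens are out of vocabulary; matching the pre-trained model's tokenization eliminates the mismatch entirely in this toy case.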
Newer methods like BERT use subword tokenizers. Subword tokenization schemes can break up words into parts, which minimizes the chance of a mismatch in tokens. Some examples of subword tokenization schemes are Byte Pair Encoding (BPE) and WordPiece tokenization. The BERT section of this chapter explains subword tokenization schemes in more detail.

If pre-trained vectors were not used, the vectors for all the words would start near zero and be trained through gradient descent. In this case, the vectors are already trained, so we expect training to go much faster. For a baseline, one epoch of training of the BiLSTM model while training embeddings from scratch takes between 65 and 100 seconds, with most epochs near the lower end, on an Ubuntu machine with an i5 processor and an Nvidia RTX 2070 GPU.

Now, let's build the model and plug the embedding matrix generated above into it. Some basic parameters need to be set up:

# Length of the vocabulary
vocab_size = imdb_encoder.vocab_size

# Number of RNN units
rnn_units = 64

# batch size
BATCH_SIZE = 100

A convenience function is set up to enable fast switching. It builds models with the same architecture but different hyperparameters:

from tensorflow.keras.layers import Embedding, LSTM, \
     Bidirectional, Dense

def build_model_bilstm(vocab_size, embedding_dim, rnn_units,
                       batch_size, train_emb=False):
    model = tf.keras.Sequential([
        Embedding(vocab_size, embedding_dim, mask_zero=True,
                  weights=[embedding_matrix], trainable=train_emb),
        Bidirectional(LSTM(rnn_units, return_sequences=True,
                           dropout=0.5)),
        Bidirectional(LSTM(rnn_units, dropout=0.25)),
        Dense(1, activation='sigmoid')
    ])
    return model

The model is identical to the one used in the previous chapter, with the exception of the highlighted code pieces above.
First, a flag can now be passed to this method specifying whether the embeddings should be trained further or frozen; it is false by default. The second change is in the definition of the Embedding layer. A new parameter, weights, loads the embedding matrix as the weights of the layer. Just after this parameter, a Boolean parameter called trainable is passed that determines whether the weights of this layer should be updated during training. A feature extraction-based model can now be created like so:

model_fe = build_model_bilstm(
                vocab_size=vocab_size,
                embedding_dim=embedding_dim,
                rnn_units=rnn_units,
                batch_size=BATCH_SIZE)
model_fe.summary()

Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_5 (Embedding)      (None, None, 50)          4696550
_________________________________________________________________
bidirectional_6 (Bidirection (None, None, 128)         58880
_________________________________________________________________
bidirectional_7 (Bidirection (None, 128)               98816
_________________________________________________________________
dense_5 (Dense)              (None, 1)                 129
=================================================================
Total params: 4,854,375
Trainable params: 157,825
Non-trainable params: 4,696,550
_________________________________________________________________

This model has about 4.8 million parameters, but only 157,825 of them are trainable since the Embedding layer is frozen. It should also be noted that this model is considerably smaller than the previous BiLSTM model, which had over 12 million parameters. A smaller model trains faster and may be less likely to overfit, as the model capacity is lower. This model needs to be compiled with the loss function, the optimizer, and the metrics to observe during training.
Binary cross-entropy is the right loss function for this binary classification problem, and the Adam optimizer is a decent choice in most cases.

Adaptive Moment Estimation or Adam Optimizer

The simplest optimization algorithm used in backpropagation for training deep neural networks is mini-batch Stochastic Gradient Descent (SGD). Any error in the prediction is propagated back, and the weights, called parameters, of the various units are adjusted according to the error. Adam is a method that eliminates some of the issues of SGD, such as getting trapped in sub-optimal local optima and having the same learning rate for every parameter. Adam computes adaptive learning rates for each parameter and adjusts them based not only on the error but also on previous adjustments. Consequently, Adam converges much faster than other optimization methods and is recommended as the default choice.

The metrics that will be observed are the same as before: accuracy, precision, and recall:

model_fe.compile(loss='binary_crossentropy',
                 optimizer='adam',
                 metrics=['accuracy', 'Precision', 'Recall'])

After setting up batches with prefetching, the model is ready for training. As before, the model will be trained for 10 epochs:

# Prefetch for performance
encoded_train_batched = encoded_train.batch(BATCH_SIZE).prefetch(100)
model_fe.fit(encoded_train_batched, epochs=10)

Epoch 1/10
250/250 [==============================] - 28s 113ms/step - loss: 0.5896 - accuracy: 0.6841 - Precision: 0.6831 - Recall: 0.6870
Epoch 2/10
250/250 [==============================] - 17s 70ms/step - loss: 0.5160 - accuracy: 0.7448 - Precision: 0.7496 - Recall: 0.7354
...
Epoch 9/10
250/250 [==============================] - 17s 70ms/step - loss: 0.4108 - accuracy: 0.8121 - Precision: 0.8126 - Recall: 0.8112
Epoch 10/10
250/250 [==============================] - 17s 70ms/step - loss: 0.4061 - accuracy: 0.8136 - Precision: 0.8147 - Recall: 0.8118

A few things can be seen immediately. First, the model trained significantly faster: each epoch took approximately 17 seconds, with a maximum of 28 seconds for the first epoch. Second, the model has not overfit – the final accuracy is just over 81% on the training set, whereas in the previous setup the training accuracy was 99.56%. It should also be noted that the accuracy was still increasing at the end of the tenth epoch, with lots of room to go, which indicates that training this model for longer would probably increase accuracy further. Quickly changing the number of epochs to 20 and training the model yields an accuracy of just over 85% on the testing set, with precision at 80% and recall at 92.8%.

For now, let's understand the utility of this model. To assess its quality, performance on the test set should be evaluated:

model_fe.evaluate(encoded_test.batch(BATCH_SIZE))

250/Unknown - 21s 85ms/step - loss: 0.3999 - accuracy: 0.8282 - Precision: 0.7845 - Recall: 0.9050

Compared to the previous model's accuracy of 83.6% on the test set, this model produces an accuracy of 82.82%. This performance is quite impressive: this model is just 40% of the size of the previous model and represents a 70% reduction in training time, for a less than 1% drop in accuracy. It has slightly better recall at the cost of slightly worse accuracy. This result should not be entirely unexpected – there are over 14,000 word vectors that are zeros in this model! To fix this issue, and also to try the fine-tuning approach to sequential transfer learning, let's build a fine-tuning-based model.
Fine-tuning model

Creating the fine-tuning model is trivial when using the convenience function. All that is needed is to pass the train_emb parameter as true:

model_ft = build_model_bilstm(
                vocab_size=vocab_size,
                embedding_dim=embedding_dim,
                rnn_units=rnn_units,
                batch_size=BATCH_SIZE,
                train_emb=True)
model_ft.summary()

This model is identical in size to the feature extraction model. However, since the embeddings will be fine-tuned, training is expected to take a little longer. The several thousand zero embeddings can now be updated, so the resulting accuracy is expected to be much better than the previous model's. The model is compiled with the same loss function, optimizer, and metrics, and trained for 10 epochs:

model_ft.compile(loss='binary_crossentropy',
                 optimizer='adam',
                 metrics=['accuracy', 'Precision', 'Recall'])
model_ft.fit(encoded_train_batched, epochs=10)

Epoch 1/10
250/250 [==============================] - 35s 139ms/step - loss: 0.5432 - accuracy: 0.7140 - Precision: 0.7153 - Recall: 0.7111
Epoch 2/10
250/250 [==============================] - 24s 96ms/step - loss: 0.3942 - accuracy: 0.8234 - Precision: 0.8274 - Recall: 0.8171
...
Epoch 9/10
250/250 [==============================] - 24s 97ms/step - loss: 0.1303 - accuracy: 0.9521 - Precision: 0.9530 - Recall: 0.9511
Epoch 10/10
250/250 [==============================] - 24s 96ms/step - loss: 0.1132 - accuracy: 0.9580 - Precision: 0.9583 - Recall: 0.9576

This training accuracy is very impressive but needs to be checked against the test set:

model_ft.evaluate(encoded_test.batch(BATCH_SIZE))

250/Unknown - 22s 87ms/step - loss: 0.4624 - accuracy: 0.8710 - Precision: 0.8789 - Recall: 0.8605

That is the best result we have obtained so far, at an accuracy of 87.1%. Data about state-of-the-art results on datasets is maintained by the paperswithcode.com website, where research papers with reproducible code are featured on per-dataset leaderboards.
This result would place about seventeenth on the SOTA leaderboard on the paperswithcode.com website at the time of writing! It can also be seen that the network is overfitting a little bit; a Dropout layer can be added between the Embedding layer and the first LSTM layer to help reduce this. It should also be noted that this network still trains much faster than training embeddings from scratch, with most epochs taking 24 seconds. Overall, this model is smaller in size, takes much less time to train, and has much higher accuracy! This is why transfer learning is so important in machine learning in general and NLP specifically.

So far, we have seen the use of context-free word embeddings. The major challenge with this approach is that a word can have multiple meanings depending on the context: the word bank could refer to a place for storing money and valuables as well as to the side of a river. A more recent innovation in this area is BERT, published in late 2018. The next step in improving the accuracy of movie review sentiment analysis is to use a pre-trained BERT model. The next section explains the BERT model, its vital innovations, and the impact of using this model for the task at hand. Please note that the BERT model is enormous! If you do not have adequate local computing resources, using Google Colab with a GPU accelerator would be an excellent choice for the next section.

BERT-based transfer learning

Embeddings like GloVe are context-free embeddings, and this lack of context can be limiting in NLP tasks. As discussed before, the word bank can mean different things depending on the context. Bidirectional Encoder Representations from Transformers, or BERT, came out of Google Research in late 2018 and demonstrated significant improvements over baselines. The BERT model builds on several innovations that came before it, and the BERT paper introduces several innovations of its own.
Two foundational advancements that enabled BERT are the encoder-decoder network architecture and the Attention mechanism. The Attention mechanism further evolved to produce the Transformer architecture, which is the fundamental building block of BERT. These concepts are covered next and detailed further in later chapters. After these two sections, we will discuss the specific innovations and structure of the BERT model.

Encoder-decoder networks

We have seen the use of LSTMs and BiLSTMs on sentences modeled as sequences of words. These sequences can be of varying lengths, as sentences are composed of different numbers of words. Recall that in Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs, we discussed the core concept of an LSTM being a unit unrolled in time. For each input token, the LSTM unit generates an output, so the number of outputs produced by the LSTM depends on the number of input tokens. All of these outputs are combined through a TimeDistributed() layer for use by later Dense() layers in the network.

The main issue is that the input and output sequence lengths are linked, so this model cannot handle variable-length output sequences effectively. Consequently, translation-type tasks, where the input and the output may have different lengths, won't do well with this architecture. The solution to these challenges was proposed in a paper titled Sequence to Sequence Learning with Neural Networks, written by Ilya Sutskever et al. in 2014. This model is also referred to as the seq2seq model.

The basic idea is shown in the figure below:

Figure 4.4: Encoder-decoder network

The model is divided into two parts – an encoder and a decoder. A special token that denotes the end of the input sequence is appended to the input. Note that the input sequence can now have any length, as this end-of-sentence token, (EOS) in the figure above, denotes the end.
In the figure above, the input sequence is denoted by tokens (I1, I2, I3, …). Each input token, after vectorization, is passed to an LSTM model. The output is only collected from the last (EOS) token. The vector generated by the encoder LSTM network for the (EOS) token is a representation of the entire input sequence – it can be thought of as a summary of the entire input. A variable-length sequence has now been transformed into a fixed-dimensional vector.

This vector becomes the input to the decoder layer. The model is auto-regressive in the sense that the output generated by the previous step of the decoder is fed into the next step as input. Output generation continues until the special (EOS) token is generated. This scheme allows the model to determine the length of the output sequence, breaking the dependency between the lengths of the input and output sequences. Conceptually, this is a straightforward model to understand, yet it is a potent one. Many tasks can be cast as a sequence-to-sequence problem. Some examples include translating a sentence from one language to another, summarizing an article where the input sequence is the text of the article and the output sequence is the summary, or question answering where the question is the input sequence and the answer is the output. Speech recognition is a sequence-to-sequence problem with input sequences of 10 ms samples of voice, and the output is text.

At the time of its release, the seq2seq model garnered much attention because of its massive impact on the quality of Google Translate. In nine months of work using this model, the team behind the seq2seq model was able to deliver much higher performance than had been achieved in over 10 years of improvements to Google Translate.

The Great A.I.
Awakening

The New York Times published a fantastic article with the above title in 2016, documenting the journey of deep learning, and especially of the authors of the seq2seq paper, and its dramatic effect on the quality of Google Translate. The article is highly recommended reading to see how transformational this architecture was for NLP. It is available at www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html.

With these techniques at hand, the next innovation was the use of the Attention mechanism, which allows the modeling of dependencies between tokens irrespective of their distance. The Attention mechanism became the cornerstone of the Transformer model, described in the next section.

Attention model

In the encoder-decoder model, the encoder part of the network creates a fixed-dimensional representation of the input sequence. As the input sequence length grows, more and more of the input is compressed into this one vector. The encodings or hidden states generated while processing the input tokens are not available to the decoder layer – the encoder states are hidden from the decoder.

The Attention mechanism allows the decoder part of the network to see the encoder hidden states. These hidden states are depicted in Figure 4.4 as the outputs of each of the input tokens (I1, I2, I3, …), but shown only as feeding into the next input token. In the Attention mechanism, these input token encodings are also made available to the decoder layer. This is called General Attention, and it refers to the ability of output tokens to depend directly on the encodings or hidden states of input tokens. The main innovation is that the decoder operates on a sequence of vectors generated by encoding the input, rather than on one fixed vector generated at the end of the input. The Attention mechanism allows the decoder to focus its attention on a subset of the encoded input vectors while decoding, hence the name.
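The core attention computation can be sketched in NumPy. Note that the dot-product scoring below is a simplification (Bahdanau's original formulation, discussed next, scores alignments with a small feed-forward network), and all dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Encoder hidden states for 5 input tokens, each a 4-dim vector.
encoder_states = rng.normal(size=(5, 4))
# Current decoder state, deciding which input positions to attend to.
decoder_state = rng.normal(size=(4,))

# Alignment scores: one score per input position (here a dot product).
scores = encoder_states @ decoder_state

# Softmax turns scores into non-negative attention weights summing to 1.
weights = np.exp(scores - scores.max())
weights = weights / weights.sum()

# Context vector: weighted sum over all encoder hidden states, letting
# the decoder focus on the most relevant input positions.
context = weights @ encoder_states
print(weights.round(3), context.shape)
```

The decoder consumes a freshly computed context vector at every output step, so different output tokens can attend to different parts of the input.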
There is another form of attention, called self-attention. Self-attention enables connections between the encodings of input tokens at different positions. In the model depicted in Figure 4.4, an input token only sees the encoding of the token before it; self-attention allows it to look at the encodings of all the other input tokens as well. Both forms are an improvement to the encoder-decoder architecture. While there are many Attention architectures, a prevalent form is called Bahdanau Attention. It is named after the first author of the paper, published in 2015, in which this Attention mechanism was proposed. Building on the encoder-decoder network, this form enables each output state to look at the encoded inputs and learn a weight for each of these inputs. Consequently, each output can focus on different input tokens. An illustration of this model is shown in Figure 4.5, which is a modified version of Figure 4.4: Figure 4.5: Bahdanau Attention architecture Two specific changes have been made in the Attention mechanism when compared to the encoder-decoder architecture. The first change is in the encoder. The encoder layer here uses BiLSTMs. The use of BiLSTMs allows each word to learn from both the words preceding it and the words succeeding it. In the standard encoder-decoder architecture, LSTMs were used, which meant each input word could only learn from the words before it. The second change is related to how the decoder uses the output of the encoders. In the previous architecture, only the output of the last token, the end-of-sentence token, was used as the summary of the entire input sequence. In the Bahdanau Attention architecture, the hidden state output of each input token is multiplied by an alignment weight that represents the degree of match between the input token at a specific position and the output token in question.
A context vector is computed by multiplying each input hidden state output by its corresponding alignment weight and summing all the results. This context vector is fed to the output token along with the previous output token. Figure 4.5 shows this computation for the second output token only. The alignment model, with its weights for each output token, can help point to the input tokens that were most helpful in generating that output token. Note that some of the details have been simplified for brevity and can be found in the paper. We will implement Attention from scratch in later chapters. Attention is not an explanation It can be tempting to interpret the alignment scores or attention weights as an explanation of why the model predicted a particular output token. A paper with the title of this information box was published that tests the hypothesis that Attention is an explanation. The conclusion from the research is that Attention should not be interpreted as an explanation: different attention weights on the same set of inputs may result in the same outputs. The next advancement to the Attention model came in the form of the Transformer architecture in 2017. The Transformer model is the key to the BERT architecture, so let's understand that next. Transformer model Vaswani et al. published a ground-breaking paper in 2017 titled Attention Is All You Need. This paper laid the foundation of the Transformer model, which has been behind most of the recent advanced models such as GPT, GPT-2, and BERT. The Transformer model builds on the Attention model by keeping its critical innovation – enabling the decoder to see all of the input hidden states – while getting rid of the recurrence, which made those models slow to train due to the sequential processing of input sequences. The Transformer model has an encoder and a decoder part.
This encoder-decoder structure enables it to perform best on machine translation-type tasks. However, not all tasks need both the encoder and the decoder stacks. BERT only uses the encoder part, while generative models like GPT-2 use the decoder part. In this section, only the encoder part of the architecture is covered. The next chapter deals with the generation of text and the best models that use the Transformer decoder; hence, the decoder will be covered in that chapter. What is a Language Model? A Language Model (LM) task is traditionally defined as predicting the next word in a sequence of words. LMs are particularly useful for text generation, but less so for classification. GPT-2 is an example of a model that fits this definition of an LM. Such a model only has context from the words or tokens that have occurred on its left (reversed for a right-to-left language). This is a trade-off that is appropriate for the generation of text. However, in other tasks such as question-answering or translation, the full sentence should be available. In such cases, a bi-directional model that can use the context from both sides is useful. BERT is such a model. It loses the auto-regression property in favor of gaining context from both sides of a token. An encoder block of the Transformer has two sub-layers – a multi-head self-attention sub-layer and a feed-forward sub-layer. The self-attention sub-layer looks at all the words of the input sequence and generates an encoding for these words in the context of each other. The feed-forward sub-layer is composed of two linear transformations with a ReLU activation in between.
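As a rough numpy sketch (random stand-in weights, and dimensions taken from the original Transformer paper rather than from BERT), the position-wise feed-forward sub-layer just described amounts to:

```python
import numpy as np

# Sketch of the position-wise feed-forward sub-layer: two linear
# transformations with a ReLU in between, applied independently at
# every position. Dimensions follow the original Transformer paper
# (d_model=512, d_ff=2048); the weights here are random stand-ins.
d_model, d_ff = 512, 2048
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)

def feed_forward(x):                       # x: (seq_len, d_model)
    return np.maximum(0, x @ W1 + b1) @ W2 + b2

out = feed_forward(rng.normal(size=(10, d_model)))   # shape (10, 512)
```

Because the same two transformations are applied at every position, the sub-layer mixes no information across tokens; that mixing is done entirely by the self-attention sub-layer.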
Each encoder block is composed of these two sub-layers, while the entire encoder is composed of six such blocks, as shown in Figure 4.6: Figure 4.6: Transformer encoder architecture A residual connection around the multi-head attention block and the feed-forward block is made in each encoder block. While adding the output of the sub-layer to the input it received, layer normalization is performed. The main innovation here is the Multi-Head Attention block. There are eight structurally identical attention blocks whose outputs are concatenated to produce the multi-head attention output. Each attention block takes in the encoding and defines three new vectors, called the query, key, and value vectors. Each of these vectors is 64-dimensional, though this size is a hyperparameter that can be tuned. The query, key, and value vectors are learned through training. To understand how this works, let's assume that the input has three tokens, each with a corresponding embedding. A weight matrix is initialized for each of the query, key, and value; multiplying the embedding of an input token by these matrices produces that token's query, key, and value vectors. After the query vector is computed for a token, it is multiplied by the key vectors of all the input tokens. Note that the encoder has access to all the inputs, on both sides of each token. As a result, a score has now been computed between the token in question and every token in the input sequence. All of these scores are passed through a softmax. The result can be interpreted as providing a sense of which tokens of the input are important to this particular input token. In a way, the input token in question is attentive to the other tokens with a high softmax score. This score is expected to be high when the input token attends to itself but can be high for other tokens as well.
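As a toy illustration, the query-key scoring and softmax described above can be sketched in numpy for a single attention head; the sketch also carries the computation through the value-vector weighting that the following paragraph describes. All weights here are random stand-ins, not learned parameters, and the scaling by the square root of the key dimension follows the Attention Is All You Need paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# One self-attention head: each token's query is scored against every
# token's key; the softmax of those scores weights the value vectors,
# which are then summed into that token's output.
rng = np.random.default_rng(0)
T, d_model, d_k = 3, 16, 64                # 3 tokens, toy embedding size, 64-dim heads
X = rng.normal(size=(T, d_model))          # embeddings of the 3 input tokens
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d_k)            # (T, T): each token scored vs. every token
weights = softmax(scores, axis=-1)         # each row sums to 1
out = weights @ V                          # (T, d_k) output value vectors
```

Each row of `weights` shows which input tokens a given token attends to, which is the quantity the surrounding text interprets.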
Next, this softmax score is multiplied by the value vector of each token. All these weighted value vectors of the different input tokens are then summed up. Value vectors of tokens with higher softmax scores contribute more to the output value vector of the input token in question. This completes the calculation of the output for a given token in the Attention layer. Multi-head self-attention creates multiple copies of the query, key, and value vectors, along with the weight matrices used to compute them from the embedding of the input token. The paper proposed eight heads, though this could be experimented with. The outputs of the heads are concatenated and multiplied by an additional weight matrix to combine them into one output value vector. This output value vector is fed to the feed-forward layer, and the output of the feed-forward layer goes to the next encoder block or becomes the output of the model at the final encoder block. While the core BERT model is essentially the core Transformer encoder model, there are a few specific enhancements it introduced that are covered next. Note that using the BERT model is much easier, as all of these details are abstracted away. Knowing these details may, however, help in understanding BERT inputs and outputs. The code that uses BERT for the IMDb sentiment analysis follows the next section. The bidirectional encoder representations from transformers (BERT) model The emergence of the Transformer architecture was a seminal moment in the NLP world. This architecture has driven a lot of innovation through several derivative architectures, and BERT is one such model. It was released in 2018. The BERT model only uses the encoder part of the Transformer architecture. The layout of the encoder is the same as described earlier, but with twelve encoder blocks and twelve attention heads. The size of the hidden layers is 768. This set of parameters is referred to as BERT Base.
These hyperparameters result in a total model size of 110 million parameters. A larger model was also published with 24 encoder blocks, 16 attention heads, and a hidden unit size of 1,024. Since the paper came out, a number of different variants of BERT like ALBERT, DistilBERT, RoBERTa, CamemBERT, and so on have also emerged. Each of these models has tried to improve the BERT performance in terms of accuracy or in terms of training/inference time. The way BERT is pre-trained is unique. It uses the multi-task transfer learning principle explained above to pre-train on two different objectives. The first objective is the Masked Language Model (MLM) task. In this task, some of the input tokens are masked randomly. The model has to predict the right token given the tokens on both sides of the masked token. Specifically, a token in the input sequence is replaced with a special [MASK] token 80% of the time. In 10% of the cases, the selected token is replaced with another random token from the vocabulary. In the last 10% of the cases, the token is kept unchanged. Further, this happens for 15% of the overall tokens in a batch. The consequence of this scheme is that the model cannot rely on certain tokens being present and is forced to learn a contextual representation based on the distribution of the tokens before and after any given token. Without this masking, the bidirectional nature of the model means each word would be able to indirectly see itself from either direction. This would make the task of predicting the target token really easy. The second objective the model is pre-trained on is Next Sentence Prediction (NSP). The intuition here is that there are many NLP tasks that deal with pairs of sentences. For example, a question-answering problem can model the question as the first sentence, and the passage to be used to answer the question becomes the second sentence. 
The output from the model may be a span identifier that identifies the start and end token indices in the passage provided as the answer to the question. In the case of sentence similarity or paraphrasing, both sentences of a pair can be passed in to get a similarity score. The NSP model is trained by passing in sentence pairs with a binary label that indicates whether the second sentence follows the first sentence. 50% of the training examples are actual next sentences from the corpus, with the label IsNext, while in the other 50% a random sentence is passed with the output label NotNext. BERT also addresses a problem we saw in the GloVe example above – out-of-vocabulary tokens. About 15% of the tokens there were not in the vocabulary. To address this problem, BERT uses the WordPiece tokenization scheme with a vocabulary size of 30,000 tokens. Note that this is much smaller than the GloVe vocabulary size. WordPiece belongs to a class of tokenization schemes called subword tokenization. Other members of this class are Byte Pair Encoding (BPE), SentencePiece, and the Unigram language model. Inspiration for the WordPiece model came from the Google Translate team working with Japanese and Korean texts. If you recall the discussion on tokenization in the first chapter, we showed that the Japanese language does not use spaces to delimit words, which makes it hard to tokenize into words. Methods developed for creating vocabularies for such languages are also quite useful for languages like English, where they keep the dictionary down to a reasonable size. Consider the German translation of the phrase Life Insurance Company: Lebensversicherungsgesellschaft. Similarly, Gross Domestic Product translates to Bruttoinlandsprodukt. If whole words like these are taken as tokens, the size of the vocabulary becomes very large. A subword approach can represent these words more efficiently.
A smaller dictionary reduces training time and memory requirements. If a smaller dictionary does not come at the cost of out-of-vocabulary tokens, then it is quite useful. To help understand the concept of subword tokenization, consider an extreme example where the tokenization breaks the words apart into individual characters and numbers. The size of this vocabulary would be 37 – 26 letters, 10 digits, and the space character. An example of a subword tokenization scheme is to introduce two new tokens, -ing and -tion. Every word that ends with one of these suffixes can be broken into two subwords – the part before the suffix and the suffix itself. This can be done through knowledge of the language's grammar and constructs, using techniques such as stemming and lemmatization. The WordPiece tokenization approach used in BERT is based on BPE. In BPE, the first step is defining a target vocabulary size. Next, the entire text is converted to a vocabulary of just the individual character tokens, mapped to their frequency of occurrence. Multiple passes are then made over this vocabulary, combining pairs of tokens so as to maximize the frequency of the bigram created. For each subword created, a special token is added to denote the end of the word so that detokenization can be performed. Further, if a subword is not the start of a word, a ## tag is added to help in reconstructing the original words. This process continues until the desired vocabulary size is reached, or until the base condition of a minimum frequency of 1 is hit for tokens. BPE maximizes the frequency, and WordPiece builds on top of this to include another objective. The objective for WordPiece includes increasing mutual information by considering the frequencies of the tokens being merged along with the frequency of the merged bigram. This introduces a minor adjustment to the model. RoBERTa from Facebook experimented with using a BPE model and did not see a material difference in performance.
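The BPE merge loop described above can be sketched in plain Python. This is a toy illustration on a handful of words, not the actual WordPiece implementation; the end-of-word marker is written as "</w>" here.

```python
from collections import Counter

# Toy sketch of the BPE merge loop: start from a character vocabulary
# and repeatedly merge the most frequent adjacent pair of symbols.
# "</w>" marks the end of a word so that detokenization is possible.
def bpe_merges(words, num_merges):
    # each word as a tuple of symbols, mapped to its corpus frequency
    corpus = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in corpus.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent pair
        merges.append(best)
        merged = {}
        for symbols, freq in corpus.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])  # fuse the pair
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        corpus = merged
    return merges

# e.g. bpe_merges(["low", "low", "lower", "lowest"], 2) first merges
# ("l", "o") and then ("lo", "w")
```

WordPiece modifies only the pair-selection criterion (mutual information instead of raw frequency); the merge loop itself is the same idea.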
The GPT-2 generative model is based on the BPE model. To take an example from the IMDb dataset, here is an example sentence:

This was an absolutely terrible movie. Don't be lured in by Christopher Walken or Michael Ironside.

After tokenization with BERT, it would look like this:

[CLS] This was an absolutely terrible movie . Don' t be lure ##d in by Christopher Walk ##en or Michael Iron ##side . [SEP]

Here, [CLS] and [SEP] are special tokens, which will be introduced shortly. Note how the word lured was broken up as a consequence. Now that we understand the underlying constructs of the BERT model, let's try to use it for transfer learning on the IMDb sentiment classification problem. The first step is preparing the data. All the code for the BERT implementation can be found in the imdb-transfer-learning.ipynb notebook in this chapter's GitHub folder, in the section BERT-based transfer learning. Please run the code in the section titled Loading IMDb training data to ensure the data is loaded before proceeding. Tokenization and normalization with BERT After reading the description of the BERT model, you may be bracing yourself for a difficult implementation in code. Have no fear. Our friends at Hugging Face have provided pre-trained models as well as abstractions that make working with advanced models like BERT a breeze. The general flow for getting BERT to work will be:

1. Load a pre-trained model
2. Instantiate a tokenizer and tokenize the data
3. Set up a model and compile it
4. Fit the model on the data

These steps won't take more than a few lines of code each. So let's get started.
The first step is to install the Hugging Face libraries:

!pip install transformers==3.0.2

The tokenizer is the first step – it needs to be imported before it can be used:

from transformers import BertTokenizer

bert_name = 'bert-base-cased'
tokenizer = BertTokenizer.from_pretrained(bert_name,
                                          add_special_tokens=True,
                                          do_lower_case=False,
                                          max_length=150,
                                          pad_to_max_length=True)

That is all it takes to load a pre-trained tokenizer! A few things to note in the code above. First, there are a number of models published by Hugging Face that are available for download. A full list of the models and their names can be found at. Some key BERT models that are available are:

bert-base-uncased / bert-base-cased: Variants of the base BERT model with 12 encoder layers, a hidden size of 768 units, and 12 attention heads for a total of ~110 million parameters. The only difference is whether the inputs were cased or all lowercase.
bert-large-uncased / bert-large-cased: This model has 24 encoder layers, 1,024 hidden units, and 16 attention heads for a total of ~340 million parameters. Similarly split into cased and lowercase models.
bert-base-multilingual-cased: Parameters are the same as bert-base-cased above, trained on the 104 languages with the largest Wikipedia entries. An uncased version is available, but it is not recommended for international languages.
bert-base-cased-finetuned-mrpc: This model has been fine-tuned on the Microsoft Research Paraphrase Corpus task for paraphrase identification in the news domain.
bert-base-japanese: Same size as the base model but trained on Japanese text. Note that both the MeCab and WordPiece tokenizers are used.
bert-base-chinese: Same size as the base model but trained on cased simplified Chinese and traditional Chinese.

Any of these model names can be used in the bert_name variable above to load the appropriate tokenizer.
The second line in the code above downloads the configuration and the vocabulary file from the cloud and instantiates a tokenizer. This loader takes a number of parameters. Since a cased English model is being used, we don't want the tokenizer to convert words to lowercase, as specified by the do_lower_case parameter. Note that the default value of this parameter is True. The input sentences will be tokenized to a maximum of 150 tokens, as we saw in the GloVe model as well. pad_to_max_length further indicates that the tokenizer should also pad the sequences it generates. The first argument, add_special_tokens, deserves some explanation. In the examples so far, we have taken a sequence and a maximum length; if the sequence is shorter than the maximum length, it is padded with a special padding token. However, BERT has a special way to encode its inputs due to the next sentence prediction pre-training task: it needs a way to provide two sequences as input. In the case of classification, like the IMDb sentiment prediction, the second sequence is simply left empty. There are three sequences that need to be provided to the BERT model:

• input_ids: This corresponds to the tokens in the inputs converted into IDs. This is what we have been doing thus far in other examples. In the IMDb example, we only have one sequence. However, if the problem required passing in two sequences, then a special token, [SEP], would be added in between the sequences. [SEP] is an example of a special token added by the tokenizer. Another special token, [CLS], is prepended to the start of the inputs. [CLS] stands for classifier token. The embedding for this token can be viewed as a summary of the inputs in the case of a classification problem, and additional layers on top of the BERT model would use this token. It is also possible to use the sum of the embeddings of all the inputs as an alternative.
• token_type_ids: If the input contains two sequences, for a question-answering problem, for example, then these IDs tell the model which input_ids correspond to which sequence. In some texts, this is referred to as the segment identifier. The first sequence is the first segment, and the second sequence is the second segment.

• attention_mask: Given that the sequences are padded, this mask tells the model where the actual tokens end, so that the attention calculation does not use the padding tokens.

Given that BERT can take two sequences as input, understanding the padding is essential, as it can be confusing how padding works relative to the maximum sequence length when a pair of sequences is provided. The maximum sequence length refers to the combined length of the pair. There are three different ways to truncate if the combined length exceeds the maximum length. The first two truncate tokens from either the first or the second sequence only. The third way truncates the longest sequence one token at a time, so that the lengths of the pair differ by at most one. In the constructor, this behavior can be configured by passing the truncation_strategy parameter with the values only_first, only_second, or longest_first. Figure 4.7 shows how an input sequence is converted into the three input sequences listed above: Figure 4.7: Mapping inputs to BERT sequences If the input sequence was Don't be lured, then the figure above shows how it is tokenized with the WordPiece tokenizer, including the addition of the special tokens. The example above sets a maximum sequence length of nine tokens. Only one sequence is provided, hence the token type IDs or segment IDs all have the same value. The attention mask is set to 1 wherever the corresponding entry in the tokens is an actual token.
The following code is used to generate these encodings:

tokenizer.encode_plus(" Don't be lured",
                      add_special_tokens=True,
                      max_length=9,
                      pad_to_max_length=True,
                      return_attention_mask=True,
                      return_token_type_ids=True)

{'input_ids': [101, 1790, 112, 189, 1129, 19615, 1181, 102, 0],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 0]}

Even though we won't be using a pair of sequences in this chapter, it is useful to be aware of how the encodings look when a pair is passed. If two strings are passed to the tokenizer, then they are treated as a pair. This is shown in the code below:

tokenizer.encode_plus(" Don't be", " lured",
                      add_special_tokens=True,
                      max_length=10,
                      pad_to_max_length=True,
                      return_attention_mask=True,
                      return_token_type_ids=True)

{'input_ids': [101, 1790, 112, 189, 1129, 102, 19615, 1181, 102, 0],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]}

The input IDs have two separators to distinguish between the two sequences. The token type IDs help distinguish which tokens correspond to which sequence. Note that the token type ID for the padding token is set to 0. In the network, it is never used, as all the values are multiplied by the attention mask. To perform encoding of the inputs for all the IMDb reviews, a helper function is defined, as shown below:

def bert_encoder(review):
    txt = review.numpy().decode('utf-8')
    encoded = tokenizer.encode_plus(txt, add_special_tokens=True,
                                    max_length=150,
                                    pad_to_max_length=True,
                                    return_attention_mask=True,
                                    return_token_type_ids=True)
    return encoded['input_ids'], encoded['token_type_ids'], \
           encoded['attention_mask']

The method is pretty straightforward. It takes the input tensor and uses UTF-8 decoding. Using the tokenizer, this input is converted into the three sequences. This would be a great opportunity to implement a different padding algorithm.
For example, implement an algorithm that takes the last 150 tokens instead of the first 150 and compare the performance of the two methods. Now, this needs to be applied to every review in the training data:

bert_train = [bert_encoder(r) for r, l in imdb_train]
bert_lbl = [l for r, l in imdb_train]

bert_train = np.array(bert_train)
bert_lbl = tf.keras.utils.to_categorical(bert_lbl, num_classes=2)

Labels of the reviews are also converted into categorical values. Using the sklearn package, the training data is split into training and validation sets:

# create training and validation splits
from sklearn.model_selection import train_test_split
x_train, x_val, y_train, y_val = train_test_split(bert_train, bert_lbl,
                                                  test_size=0.2,
                                                  random_state=42)
print(x_train.shape, y_train.shape)

(20000, 3, 150) (20000, 2)

A little more data processing is required to wrangle the inputs into three input dictionaries in a tf.data.Dataset for easy use in training:

tr_reviews, tr_segments, tr_masks = np.split(x_train, 3, axis=1)
val_reviews, val_segments, val_masks = np.split(x_val, 3, axis=1)

tr_reviews = tr_reviews.squeeze()
tr_segments = tr_segments.squeeze()
tr_masks = tr_masks.squeeze()

val_reviews = val_reviews.squeeze()
val_segments = val_segments.squeeze()
val_masks = val_masks.squeeze()

These training and validation sequences are converted into a dataset like so:

def example_to_features(input_ids, attention_masks, token_type_ids, y):
    return {"input_ids": input_ids,
            "attention_mask": attention_masks,
            "token_type_ids": token_type_ids}, y

train_ds = tf.data.Dataset.from_tensor_slices((tr_reviews, tr_masks, tr_segments, y_train)).\
    map(example_to_features).shuffle(100).batch(16)

valid_ds = tf.data.Dataset.from_tensor_slices((val_reviews, val_masks, val_segments, y_val)).\
    map(example_to_features).shuffle(100).batch(16)

A batch size of 16 has been used here. The memory of the GPU is the limiting factor here.
Google Colab can support a batch size of 32, while an 8 GB RAM GPU can support a batch size of 16. Now, we are ready to train a model using BERT for classification. We will see two approaches. The first approach uses a pre-built classification model on top of BERT and is shown in the next section. The second approach uses the base BERT model and adds custom layers on top to accomplish the same task; this technique is demonstrated in the section after that. Pre-built BERT classification model The Hugging Face libraries make it really easy to use a pre-built BERT model for classification by providing a class to do so:

from transformers import TFBertForSequenceClassification
bert_model = TFBertForSequenceClassification.from_pretrained(bert_name)

That was quite easy, wasn't it? Note that the instantiation of the model will require a download of the model from the cloud. These models are cached on the local machine if the code is being run from a local or dedicated machine. In the Google Colab environment, this download will run every time a Colab instance is initialized.
To use this model, we only need to provide an optimizer and a loss function and compile the model:

optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)
loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
bert_model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])

This model is actually quite simple in layout, as its summary shows:

bert_model.summary()

Model: "tf_bert_for_sequence_classification_7"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
bert (TFBertMainLayer)       multiple                  108310272
_________________________________________________________________
dropout_303 (Dropout)        multiple                  0
_________________________________________________________________
classifier (Dense)           multiple                  1538
=================================================================
Total params: 108,311,810
Trainable params: 108,311,810
Non-trainable params: 0
_________________________________________________________________

So, the model has the entire BERT model, a dropout layer, and a classifier layer on top. This is as simple as it gets. The BERT paper suggests some settings for fine-tuning: a batch size of 16 or 32, run for 2 to 4 epochs, with one of the following learning rates for Adam: 5e-5, 3e-5, or 2e-5. Once this model is up and running in your environment, please feel free to train with different settings to see the impact on accuracy. In the previous section, we batched the data into sets of 16. Here, the Adam optimizer is configured to use a learning rate of 2e-5. Let's train this model for 3 epochs.
Note that training is going to be quite slow:

print("Fine-tuning BERT on IMDB")
bert_history = bert_model.fit(train_ds, epochs=3,
                              validation_data=valid_ds)

Fine-tuning BERT on IMDB
Train for 1250 steps, validate for 313 steps
Epoch 1/3
1250/1250 [==============================] - 480s 384ms/step - loss: 0.3567 - accuracy: 0.8320 - val_loss: 0.2654 - val_accuracy: 0.8813
Epoch 2/3
1250/1250 [==============================] - 469s 375ms/step - loss: 0.2009 - accuracy: 0.9188 - val_loss: 0.3571 - val_accuracy: 0.8576
Epoch 3/3
1250/1250 [==============================] - 470s 376ms/step - loss: 0.1056 - accuracy: 0.9613 - val_loss: 0.3387 - val_accuracy: 0.8883

The validation accuracy is quite impressive for the little work we have done here, if it holds on the test set. That needs to be checked next. Using the convenience methods from the previous section, the test data will be tokenized and encoded in the right format:

# prep data for testing
bert_test = [bert_encoder(r) for r, l in imdb_test]
bert_tst_lbl = [l for r, l in imdb_test]

bert_test2 = np.array(bert_test)
bert_tst_lbl2 = tf.keras.utils.to_categorical(bert_tst_lbl, num_classes=2)

ts_reviews, ts_segments, ts_masks = np.split(bert_test2, 3, axis=1)
ts_reviews = ts_reviews.squeeze()
ts_segments = ts_segments.squeeze()
ts_masks = ts_masks.squeeze()

test_ds = tf.data.Dataset.from_tensor_slices((ts_reviews, ts_masks, ts_segments, bert_tst_lbl2)).\
    map(example_to_features).shuffle(100).batch(16)

Evaluating the performance of this model on the test dataset, we get the following:

bert_model.evaluate(test_ds)
1563/1563 [==============================] - 202s 129ms/step - loss: 0.3647 - accuracy: 0.8799
[0.3646871318983454, 0.8799]

The model accuracy is almost 88%! This is higher than the best GloVe model shown previously, and it took much less code to implement. In the next section, let's try to build custom layers on top of the BERT model to take transfer learning to the next level.
Custom model with BERT The BERT model outputs contextual embeddings for all of the input tokens. The embedding corresponding to the [CLS] token is generally used for classification tasks, as it represents the entire document. The pre-built model from Hugging Face returns the embeddings for the entire sequence as well as this pooled output, which represents the entire document. This pooled output vector can be used in further layers to help with the classification task. This is the approach we will take in building a custom model. The code for this section is under the heading Customer Model With BERT in the same notebook as above. The starting point for this exploration is the base TFBertModel. It can be imported and instantiated like so:

from transformers import TFBertModel
bert_name = 'bert-base-cased'
bert = TFBertModel.from_pretrained(bert_name)
bert.summary()

Model: "tf_bert_model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
bert (TFBertMainLayer)       multiple                  108310272
=================================================================
Total params: 108,310,272
Trainable params: 108,310,272
Non-trainable params: 0
_________________________________________________________________

Since we are using the same pre-trained model, the cased BERT-Base model, we can reuse the tokenized and prepared data from the section above. If you haven't already, take a moment to ensure the code in the Tokenization and normalization with BERT section has been run to prepare the data. Now, the custom model needs to be defined. The first layer of this model is the BERT layer.
This layer will take three inputs, namely the input tokens, attention masks, and token type IDs:

max_seq_len = 150
inp_ids = tf.keras.layers.Input((max_seq_len,), dtype=tf.int64, name="input_ids")
att_mask = tf.keras.layers.Input((max_seq_len,), dtype=tf.int64, name="attention_mask")
seg_ids = tf.keras.layers.Input((max_seq_len,), dtype=tf.int64, name="token_type_ids")

These names need to match the dictionary defined in the training and testing dataset. This can be checked by printing the specification of the dataset:

train_ds.element_spec
({'input_ids': TensorSpec(shape=(None, 150), dtype=tf.int64, name=None),
  'attention_mask': TensorSpec(shape=(None, 150), dtype=tf.int64, name=None),
  'token_type_ids': TensorSpec(shape=(None, 150), dtype=tf.int64, name=None)},
 TensorSpec(shape=(None, 2), dtype=tf.float32, name=None))

The BERT model expects these inputs in a dictionary. It can also accept the inputs as named arguments, but this approach is clearer and makes it easy to trace the inputs. Once the inputs are mapped, the output of the BERT model can be computed:

inp_dict = {"input_ids": inp_ids,
            "attention_mask": att_mask,
            "token_type_ids": seg_ids}
outputs = bert(inp_dict)

# let's see the output structure
outputs

The first output has embeddings for each of the input tokens, including the special tokens [CLS] and [SEP]. The second output corresponds to the output of the [CLS] token. This output will be used further in the model:

x = tf.keras.layers.Dropout(0.2)(outputs[1])
x = tf.keras.layers.Dense(200, activation='relu')(x)
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.Dense(2, activation='sigmoid')(x)

custom_model = tf.keras.models.Model(inputs=inp_dict, outputs=x)

The model above is only illustrative, to demonstrate the technique. We add a dense layer and a couple of dropout layers before an output layer. Now, the custom model is ready for training.
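To make the shapes in the head above concrete, here is a plain-numpy sketch of the same stack of layers at inference time. The weights are randomly initialized stand-ins, not the trained Keras layers, and dropout is treated as identity, as it would be during inference:

```python
import numpy as np

rng = np.random.default_rng(0)

pooled = rng.normal(size=(4, 768))       # batch of 4 pooled [CLS] vectors

# Hypothetical stand-in weights for the Dense(200, relu) layer
w1 = rng.normal(scale=0.02, size=(768, 200))
b1 = np.zeros(200)
# ... and for the Dense(2, sigmoid) output layer
w2 = rng.normal(scale=0.02, size=(200, 2))
b2 = np.zeros(2)

x = np.maximum(pooled @ w1 + b1, 0)      # ReLU; dropout skipped at inference
scores = x @ w2 + b2
probs = 1.0 / (1.0 + np.exp(-scores))    # sigmoid squashes into (0, 1)

print(probs.shape)                       # (4, 2): one score pair per review
```

The pooled 768-dimensional vector per example is reduced to two sigmoid scores, matching the two-class one-hot labels used for the IMDb task.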
The model needs to be compiled with an optimizer, loss function, and metrics to watch for:

optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)
loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
custom_model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])

Here is what this model looks like:

custom_model.summary()

This custom model has 154,202 additional trainable parameters in addition to the BERT parameters. The model is ready to be trained. We will use the same settings from the previous BERT section and train the model for 3 epochs:

print("Custom Model: Fine-tuning BERT on IMDB")
custom_history = custom_model.fit(train_ds, epochs=3, validation_data=valid_ds)

Custom Model: Fine-tuning BERT on IMDB
Train for 1250 steps, validate for 313 steps
Epoch 1/3
1250/1250 [==============================] - 477s 381ms/step - loss: 0.5912 - accuracy: 0.8069 - val_loss: 0.6009 - val_accuracy: 0.8020
Epoch 2/3
1250/1250 [==============================] - 469s 375ms/step - loss: 0.5696 - accuracy: 0.8570 - val_loss: 0.5643 - val_accuracy: 0.8646
Epoch 3/3
1250/1250 [==============================] - 470s 376ms/step - loss: 0.5559 - accuracy: 0.8883 - val_loss: 0.5647 - val_accuracy: 0.8669

Evaluating on the test set gives an accuracy of 86.29%. Note that the test data encoding steps used in the pretrained BERT model section are used here as well:

custom_model.evaluate(test_ds)
1563/1563 [==============================] - 201s 128ms/step - loss: 0.5667 - accuracy: 0.8629

Fine-tuning of BERT is run for a small number of epochs with a small value for Adam's learning rate. If a lot of fine-tuning is done, then there is a risk of BERT forgetting its pretrained parameters. This can be a limitation while building custom models on top, as a few epochs may not be sufficient to train the layers that have been added. In this case, the BERT model layer can be frozen, and training can be continued further.
Freezing the BERT layer is fairly easy, though it needs the model to be recompiled:

bert.trainable = False  # don't train BERT any more
optimizer = tf.keras.optimizers.Adam()  # standard learning rate
custom_model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])

We can check the model summary to verify that the number of trainable parameters has changed to reflect the BERT layer being frozen:

custom_model.summary()

Figure 4.8: Model summary

We can see that all the BERT parameters are now set to non-trainable. Since the model had to be recompiled, we also took the opportunity to change the learning rate. Changing the sequence length and learning rate during training are advanced techniques in TensorFlow. The BERT model also used 128 as the sequence length for initial epochs, which was changed to 512 later in training. It is also common to see the learning rate increase for the first few epochs and then decrease as training proceeds. Now, training can be continued for a number of epochs like so:

print("Custom Model: Keep training custom model on IMDB")
custom_history = custom_model.fit(train_ds, epochs=10, validation_data=valid_ds)

The training output has not been shown for brevity. Checking the model on the test set yields 86.96% accuracy:

custom_model.evaluate(test_ds)
1563/1563 [==============================] - 195s 125ms/step - loss: 0.5657 - accuracy: 0.8696

You may wonder whether the accuracy of this custom model being lower than that of the pre-trained model is cause for concern. It is a fair question to ponder. A bigger network is not always better, and overtraining can lead to a reduction in model performance due to overfitting. Something to try in the custom model is to use the output encodings of all the input tokens and pass them through an LSTM layer, or concatenate them together to pass through dense layers, and then make the prediction.
Having done the tour of the encoder side of the Transformer architecture, we are ready to look into the decoder side of the architecture, which is used for text generation. That will be the focus of the next chapter. Before we go there, let's review everything we covered in this chapter.

Summary

Transfer learning has made a lot of progress possible in the world of NLP, where unlabeled data is readily available but labeled data is a challenge. We covered different types of transfer learning first. Then, we took pre-trained GloVe embeddings and applied them to the IMDb sentiment analysis problem, seeing comparable accuracy with a much smaller model that takes much less time to train.

Next, we learned about seminal moments in the evolution of NLP models, starting from encoder-decoder architectures, attention, and Transformer models, before understanding the BERT model. Using the Hugging Face library, we used a pretrained BERT model and a custom model built on top of BERT for the purpose of sentiment classification of IMDb reviews.

BERT only uses the encoder part of the Transformer model. The decoder side of the stack is used in text generation. The next two chapters will focus on completing the understanding of the Transformer model. The next chapter will use the decoder side of the stack to perform text generation and sentence completion. The chapter after that will use the full encoder-decoder network architecture for text summarization.

5
Generating Text with RNNs and GPT-2

When your mobile phone completes a word as you type a message, or when Gmail suggests a short reply or completes a sentence as you reply to an email, a text generation model is working in the background.
The Transformer architecture forms the basis of state-of-the-art text generation models. BERT, as explained in the previous chapter, uses only the encoder part of the Transformer architecture. However, BERT, being bi-directional, is not suitable for the generation of text. A left-to-right (or right-to-left, depending on the language) language model built on the decoder part of the Transformer architecture is the foundation of text generation models today.

Text can be generated a character at a time or with words and sentences together. Both of these approaches are shown in this chapter. Specifically, we will cover the following topics:

• Generating text with:
  • Character-based RNNs for generating news headlines and completing text messages
  • GPT-2 to generate full sentences
• Improving the quality of text generation using techniques such as:
  • Greedy search
  • Beam search
  • Top-K sampling
• Using advanced techniques such as learning rate annealing and checkpointing to enable long training times
• Details of the Transformer decoder architecture
• Details of the GPT and GPT-2 models

A character-based approach for generating text is shown first. Such models can be quite useful for generating completions of a partially typed word in a sentence on a messaging platform, for example.

Generating text – one character at a time

Text generation yields a window into whether deep learning models are learning about the underlying structure of language. Text will be generated using two different approaches in this chapter. The first approach is an RNN-based model that generates a character at a time. In the previous chapters, we have seen different tokenization methods based on words and sub-words. Here, text is tokenized into characters, which include capital and small letters, punctuation symbols, and digits. There are 96 tokens in total.
This tokenization is an extreme example to test how much a model can learn about the language structure. The model will be trained to predict the next character based on a given set of input characters. If there is indeed an underlying structure in the language, the model should pick it up and generate reasonable-looking sentences.

Generating coherent sentences one character at a time is a very challenging task. The model does not have a dictionary or vocabulary, and it has no sense of capitalization of nouns or any grammar rules. Yet, we are expecting it to generate reasonable-looking sentences. The structure of words and their order in a sentence is not random but driven by the grammar rules of a language. Words have some structure, based on parts of speech and word roots. A character-based model has the smallest possible vocabulary, but we hope that the model learns a lot about the use of the letters. This may seem like a tall order, but be prepared to be surprised. Let's get started with the data loading and pre-processing steps.

Data loading and pre-processing

For this particular example, we are going to use data from a constrained domain – a set of news headlines. The hypothesis is that news headlines are usually short and follow a particular structure. These headlines are usually a summary of an article and contain a large number of proper nouns like names of companies and celebrities. For this particular task, data from two different datasets is joined together and used.

The first dataset is called the News Aggregator dataset, generated by the Artificial Intelligence Lab, part of the Faculty of Engineering at Roma Tre University in Italy. The University of California, Irvine, has made the dataset available for download. This dataset has over 420,000 news article titles, URLs, and other information.
The second dataset is a set of over 200,000 news articles from The Huffington Post, called the News Category dataset, collected by Rishabh Mishra and posted on Kaggle at kaggle.com/rmisra/news-category-dataset.

News article headlines from both datasets are extracted and compiled into one file. This step is already done to save time. The compressed output file is called news-headlines.tsv.zip and is located in the chapter5-nlg-with-transformer-gpt/charrnn folder of the GitHub repository for this book. The format of this file is pretty simple. It has two columns separated by a tab. The first column is the original headline, and the second column is an uncased version of the same headline. This example uses the first column of the file only. However, you can try the uncased version to see how the results differ.

Training such models usually takes a lot of time, often several hours. Training in an IPython notebook can be difficult, as a number of issues, such as the loss of the connection to the kernel or the kernel process dying, can result in the loss of the trained model. What we are attempting to do in this example is akin to training BERT from scratch. Don't worry; we train the model for a much shorter time than it took to train BERT. Running long training loops runs the risk of the loop crashing in the middle. In such a case, we don't want to restart training from scratch. The model is checkpointed frequently during training so that the model state can be restored from the last checkpoint if a failure occurs. Then, training can be restarted from the last checkpoint. Python files executed from the command line give the most control when running long training loops.

The command-line instructions shown in this example were tested on an Ubuntu 18.04 LTS machine. These commands should work as is on a macOS command line but may need some adjustments.
Windows users may need to translate these commands for their operating system. Windows 10 power users should be able to use the Windows Subsystem for Linux (WSL) capabilities to execute the same commands.

Going back to the data format, all that needs to be done for loading the data is to unzip the prepared headline file. Navigate to the folder where the ZIP file has been pulled down from GitHub. The compressed file of headlines can be unzipped and inspected:

$ unzip news-headlines.tsv.zip
Archive: news-headlines.tsv.zip
  inflating: news-headlines.tsv

Let's inspect the contents of the file to get a sense of the data:

$ head -3 news-headlines.tsv
There Were 2 Mass Shootings In Texas Last Week, But Only 1 On TV	there were 2 mass shootings in texas last week, but only 1 on tv
Will Smith Joins Diplo And Nicky Jam For The 2018 World Cup's Official Song	will smith joins diplo and nicky jam for the 2018 world cup's official song
Hugh Grant Marries For The First Time At Age 57	hugh grant marries for the first time at age 57

The model is trained on the headlines shown above. We are ready to move on to the next step and load the file to perform normalization and tokenization.

Data normalization and tokenization

As discussed above, this model uses a token per character. So, each letter, including punctuation, numbers, and space, becomes a token. Three additional tokens are added. These are:

• <EOS>: Denotes the end of a sentence. The model can use this token to indicate that the generation of text is complete. All headlines end with this token.
• <UNK>: While this is a character-level model, it is possible to have characters from other languages or character sets in the dataset. When a character is detected that is not present in our set of 96 characters, this token is used. This approach is consistent with word-based vocabulary approaches, where it is common to replace out-of-vocabulary words with a special token.
• <PAD>: This is a unique padding token used to pad all headlines to the same length. Padding is done by hand in this example as opposed to using TensorFlow methods, which we have seen previously.

All the code in this section will refer to the rnn-train.py file from the chapter5-nlg-with-transformer-gpt folder of the GitHub repo of the book. The first part of this file has the imports and optional instructions for setting up a GPU. Ignore this section if your setup does not use a GPU.

A GPU is an excellent investment for deep learning engineers and researchers. A GPU could speed up your training times by orders of magnitude or more! It would be worthwhile to outfit your deep learning setup with a GPU like the Nvidia GeForce RTX 2070.

The code for data normalization and tokenization is between lines 32 and 90 of this file. To start, the tokenization function needs to be set up:

chars = sorted(set("abcdefghijklmnopqrstuvwxyz0123456789 -,;.!?:'''/\|_@#$%ˆ&*˜'+-=()[]{}' ABCDEFGHIJKLMNOPQRSTUVWXYZ"))
chars = list(chars)
EOS = '<EOS>'
UNK = "<UNK>"
PAD = "<PAD>"  # need to move mask to '0' index for Embedding layer
chars.append(UNK)
chars.append(EOS)  # end of sentence
chars.insert(0, PAD)  # now padding should get index of 0

Once the token list is ready, methods need to be defined for converting characters to tokens and vice versa. Creating the mapping is relatively straightforward:

# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(chars)}
idx2char = np.array(chars)

def char_idx(c):
    # takes a character and returns an index
    # if character is not in list, returns the unknown token
    if c in chars:
        return char2idx[c]
    return char2idx[UNK]

Now, the data can be read in from the TSV file. A maximum length of 75 characters is used for the headlines. If the headlines are shorter than this length, they are padded. Any headlines longer than 75 characters are snipped.
The <EOS> token is appended to the end of every headline. Let's set this up:

data = []       # load into this list of lists
MAX_LEN = 75    # maximum length of a headline
with open("news-headlines.tsv", "r") as file:
    lines = csv.reader(file, delimiter='\t')
    for line in lines:
        hdln = line[0]
        cnvrtd = [char_idx(c) for c in hdln[:-1]]
        if len(cnvrtd) >= MAX_LEN:
            cnvrtd = cnvrtd[0:MAX_LEN-1]
            cnvrtd.append(char2idx[EOS])
        else:
            cnvrtd.append(char2idx[EOS])
            # add padding tokens
            remain = MAX_LEN - len(cnvrtd)
            if remain > 0:
                for i in range(remain):
                    cnvrtd.append(char2idx[PAD])
        data.append(cnvrtd)
print("**** Data file loaded ****")

All the data is loaded into a list with the code above. You may be wondering about the ground truth here for training, as we only have a line of text. Since we want this model to generate text, the objective can be reduced to predicting the next character given a set of characters. Hence, a trick will be used to construct the ground truth – we will just shift the input sequence by one character and set it as the expected output. This transformation is quite easy to do with numpy:

# now convert to numpy array
np_data = np.array(data)
# for training, we use one character shifted data
np_data_in = np_data[:, :-1]
np_data_out = np_data[:, 1:]

With this nifty trick, we have both inputs and expected outputs ready for training. The final step is to convert the arrays into a tf.data.Dataset for ease of batching and shuffling:

# Create TF dataset
x = tf.data.Dataset.from_tensor_slices((np_data_in, np_data_out))

Now everything is ready to start training.

Training the model

The code for model training starts at line 90 in the rnn-train.py file. The model is quite simple. It has an embedding layer, followed by a GRU layer and a dense layer.
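Before building the model, the one-character-shift trick above can be verified on a toy array of hypothetical token IDs (not the real headline data):

```python
import numpy as np

# two "headlines" of 5 token IDs each, with trailing padding zeros
toy = np.array([[5, 6, 7, 8, 0],
                [9, 3, 4, 0, 0]])

toy_in = toy[:, :-1]   # all but the last position
toy_out = toy[:, 1:]   # the same rows shifted left by one

# at every position, the target is the next input token
print(toy_in[0])   # [5 6 7 8]
print(toy_out[0])  # [6 7 8 0]
```

At each time step, the model sees the tokens up to that position and is trained to predict the token that comes next, which is exactly what the shifted copy provides.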
The size of the vocabulary, the number of RNN units, and the size of the embeddings are set up:

# Length of the vocabulary in chars
vocab_size = len(chars)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
# batch size
BATCH_SIZE = 256

With the batch size defined, training data can be batched and made ready for use by the model:

# create tf.data.Dataset
x_train = x.shuffle(100000, reshuffle_each_iteration=True).batch(BATCH_SIZE, drop_remainder=True)

Similar to code in previous chapters, a convenience method to build models is defined like so:

# define the model
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  mask_zero=True,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.GRU(rnn_units,
                            return_sequences=True,
                            stateful=True,
                            recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dropout(0.1),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model
A model can be instantiated with this method:

model = build_model(
    vocab_size=vocab_size,
    embedding_dim=embedding_dim,
    rnn_units=rnn_units,
    batch_size=BATCH_SIZE)
print("**** Model Instantiated ****")
print(model.summary())

**** Model Instantiated ****
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (256, None, 256)          24576
_________________________________________________________________
gru (GRU)                    (256, None, 1024)         3938304
_________________________________________________________________
dropout (Dropout)            (256, None, 1024)         0
_________________________________________________________________
dense (Dense)                (256, None, 96)           98400
=================================================================
Total params: 4,061,280
Trainable params: 4,061,280
Non-trainable params: 0
_________________________________________________________________

There are just over 4 million trainable parameters in this model. The Adam optimizer, with a sparse categorical cross-entropy loss function, is used for training this model:

loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam', loss=loss)
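The parameter counts in the summary above can be checked by hand. The embedding layer has vocab_size × embedding_dim weights; a Keras GRU (with the default reset_after=True, which adds a second bias vector per gate) has 3 × (input_dim × units + units × units + 2 × units); and the dense layer has units × vocab_size weights plus vocab_size biases:

```python
vocab_size, embedding_dim, rnn_units = 96, 256, 1024

emb_params = vocab_size * embedding_dim
gru_params = 3 * (embedding_dim * rnn_units + rnn_units * rnn_units + 2 * rnn_units)
dense_params = rnn_units * vocab_size + vocab_size

print(emb_params, gru_params, dense_params)    # 24576 3938304 98400
print(emb_params + gru_params + dense_params)  # 4061280, matching the summary
```

All three counts, and the total of 4,061,280, line up with the model summary printed above.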
The final piece of code uses the history to plot the loss and save it as a PNG file in the same directory: # Plot accuracies lossplot = "loss-" + dt + ".png" plt.plot(history.history['loss']) plt.title('model loss') plt.xlabel('epoch') plt.ylabel('loss') plt.savefig(lossplot) print("Saved loss to: ", lossplot) [ 157 ] Generating Text with RNNs and GPT-2 The best way to start training is to start the Python process so that it can run in the background without needing a Terminal or command-line. On Unix systems, this can be done with the nohup command: $ nohup python rnn-train.py > training.log & This command line starts the process in a way that disconnecting the Terminal would not interrupt the training process. On my machine, this training took approximately 1 hour and 43 minutes. Let's check out the loss curve: Figure 5.1: Loss curve As we can see, the loss decreases to a point and then shoots up. The standard expectation is that loss would monotonically decrease as the model was trained for more epochs. In the case shown above, the loss suddenly shoots up. In other cases, you may observe a NaN, or Not-A-Number, error. NaNs result from the exploding gradient problem during backpropagation through RNNs. The gradient direction causes weights to grow very large quickly and overflow, resulting in NaNs. Given how prevalent this is, there are quite a few jokes about NLP engineers and Indian food to go with the nans (referring to a type of Indian bread). [ 158 ] Chapter 5 The primary reason behind these occurrences is gradient descent overshooting the minima and starting to climb the slope before reducing again. This happens when the steps gradient descent is taking are too large. Another way to prevent the NaN issue is gradient clipping where gradients are clipped to an absolute maximum, preventing loss from exploding. In the RNN model above, a scheme needs to be used that reduces the learning rate over time. 
Reducing the learning rate over epochs reduces the chance of gradient descent overshooting the minima. This technique of reducing the learning rate over time is called learning rate annealing or learning rate decay. The next section walks through implementing learning rate decay while training the model.

Implementing learning rate decay as a custom callback

There are two ways to implement learning rate decay in TensorFlow. The first way is to use one of the prebuilt schedulers that are part of the tf.keras.optimizers.schedules package and pass a configured instance to the optimizer. An example of a prebuilt scheduler is InverseTimeDecay, and it can be set up as shown below:

lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    0.001,
    decay_steps=STEPS_PER_EPOCH*(EPOCHS/10),
    decay_rate=2,
    staircase=False)

The first parameter, 0.001 in the example above, is the initial learning rate. The number of steps per epoch can be calculated by dividing the number of training examples by the batch size. The number of decay steps determines how the learning rate is reduced. The equation used to compute the learning rate is:

new_rate = initial_rate / (1 + decay_rate × step / decay_steps)

After being set up, all this function needs is the step number to compute the new learning rate. Once the schedule is set up, it can be passed to the optimizer:

optimizer = tf.keras.optimizers.Adam(lr_schedule)

That's it! The rest of the training loop code is unchanged. However, this learning rate scheduler starts reducing the learning rate from the first epoch itself. A lower learning rate increases the amount of training time. Ideally, we would keep the learning rate unchanged for the first few epochs and then reduce it. Looking at Figure 5.1 above, the learning rate is probably effective until about the tenth epoch. BERT also uses learning rate warmup before learning rate decay.
Learning rate warmup generally refers to increasing the learning rate for a few epochs. BERT was trained for 1,000,000 steps, which roughly translates to 40 epochs. For the first 10,000 steps, the learning rate was increased, and then it was linearly decayed. Implementing such a learning rate schedule is better accomplished by a custom callback.

Custom callbacks in TensorFlow enable the execution of custom logic at various points during training and inference. We saw an example of a prebuilt callback that saves checkpoints during training. A custom callback provides hooks that enable desired logic to be executed at various points during training. The main step is to define a subclass of tf.keras.callbacks.Callback. Then, one or more of the following functions can be implemented to hook onto the events exposed by TensorFlow:

• on_[train,test,predict]_begin / on_[train,test,predict]_end: These callbacks happen at the start or end of training. There are methods for training, testing, and prediction loops. Names for these methods can be constructed using the appropriate stage name from the possibilities shown in brackets. The method naming convention is a common pattern across the other methods in the rest of the list.
• on_[train,test,predict]_batch_begin / on_[train,test,predict]_batch_end: These callbacks happen when training for a specific batch starts or ends.
• on_epoch_begin / on_epoch_end: These are training-specific functions called at the start or end of an epoch.

We will implement a callback for the start of the epoch that adjusts that epoch's learning rate. Our implementation will keep the learning rate constant for a configurable number of initial epochs and then reduce the learning rate in a fashion similar to the inverse time decay function described above.
This learning rate would look like the following Figure 5.2:

Figure 5.2: Custom learning rate decay function

First, a subclass is created with the function defined in it. The best place to put this in rnn-train.py is just around the checkpoint callback, before the start of training. The class definition is shown below:

class LearningRateScheduler(tf.keras.callbacks.Callback):
    """Learning rate scheduler which decays the learning rate"""

    def __init__(self, init_lr, decay, steps, start_epoch):
        super().__init__()
        self.init_lr = init_lr            # initial learning rate
        self.decay = decay                # how sharply to decay
        self.steps = steps                # total number of steps of decay
        self.start_epoch = start_epoch    # which epoch to start decaying

    def on_epoch_begin(self, epoch, logs=None):
        if not hasattr(self.model.optimizer, 'lr'):
            raise ValueError('Optimizer must have a "lr" attribute.')
        # Get the current learning rate
        lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
        if(epoch >= self.start_epoch):
            # Get the scheduled learning rate.
            scheduled_lr = self.init_lr / (1 + self.decay * (epoch / self.steps))
            # Set the new learning rate
            tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)
            print('\nEpoch %05d: Learning rate is %6.4f.' % (epoch, scheduled_lr))

Using this callback in the training loop requires the instantiation of the callback. The following parameters are set while instantiating it:

• The initial learning rate is set to 0.001.
• The decay rate is set to 4. Please feel free to play around with different settings.
• The number of steps is set to the number of epochs. The model is trained for 150 epochs.
• Learning rate decay should start after epoch 10, so the start epoch is set to 10.
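With those settings, the schedule computed by the callback can be checked with plain Python, using the same formula as in on_epoch_begin:

```python
def scheduled_lr(epoch, init_lr=0.001, decay=4.0, steps=150):
    """Learning rate applied from the start epoch onward, as in the callback."""
    return init_lr / (1 + decay * (epoch / steps))

# decay starts at epoch 10; before that the initial rate of 0.001 is used
print(round(scheduled_lr(10), 8))    # 0.00078947
print(round(scheduled_lr(149), 8))   # 0.00020107
```

So at epoch 10 the rate drops from 0.001 to roughly 0.00079, and by the end of training it has decayed to about a fifth of the initial value.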
The training loop is updated to include the callback like so:

print("**** Start Training ****")
EPOCHS = 150
lr_decay = LearningRateScheduler(0.001, 4., EPOCHS, 10)
start = time.time()
history = model.fit(x_train, epochs=EPOCHS,
                    callbacks=[checkpoint_callback, lr_decay])
print("**** End Training ****")
print("Training time: ", time.time() - start)
print("Checkpoint directory: ", checkpoint_dir)

Changes are highlighted above. Now, the model is ready to be trained using the command shown above. Training for 150 epochs took over 10 hours on the GPU-capable machine. The loss surface is shown in Figure 5.3:

Figure 5.3: Model loss after learning rate decay

In the figure above, the loss drops very fast for the first few epochs before plateauing near epoch 10. Learning rate decay kicks in at that point, and the loss starts to fall again. This can be verified from a snippet of the log file:

...
Epoch 8/150
2434/2434 [==================] - 249s 102ms/step - loss: 0.9055
Epoch 9/150
2434/2434 [==================] - 249s 102ms/step - loss: 0.9052
Epoch 10/150
2434/2434 [==================] - 249s 102ms/step - loss: 0.9064

Epoch 00010: Learning rate is 0.00078947.
Epoch 11/150
2434/2434 [==================] - 249s 102ms/step - loss: 0.8949

Epoch 00011: Learning rate is 0.00077320.
Epoch 12/150
2434/2434 [==================] - 249s 102ms/step - loss: 0.8888
...

Epoch 00149: Learning rate is 0.00020107.
Epoch 150/150
2434/2434 [==================] - 249s 102ms/step - loss: 0.7667
**** End Training ****
Training time: 37361.16723680496
Checkpoint directory: ./training_checkpoints/2021-Jan-01-09-55-03
Saved loss to: loss-2021-Jan-01-09-55-03.png

Note the highlighted loss above. The loss slightly increased around epoch 10 as learning rate decay kicked in, and then started falling again.
The small bumps in the loss visible in Figure 5.3 correspond to places where the learning rate was higher than needed, and the decay schedule kicked it down so the loss could keep falling. The learning rate started at 0.001 and ended at about a fifth of that, 0.0002. Training this model took considerable time and tricks like learning rate decay. But how does this model do in terms of generating text? That is the focus of the next section.

Generating text with greedy search

Checkpoints were taken at the end of every epoch during training. These checkpoints are used to load a trained model for generating text. This part of the code is implemented in an IPython notebook. The code for this section is found in the charRNN-text-generation.ipynb file in this chapter's folder on GitHub. The generation of text depends on the same normalization and tokenization logic used during training. The Setup Tokenization section of the notebook replicates this code.

There are two main steps in generating text. The first is restoring a trained model from a checkpoint. The second is generating a character at a time from the trained model until a specific end condition is met. The Load the Model section of the notebook has the code to define the model. Since the checkpoints only store the weights for the layers, defining the model structure is important. The main difference from the training network is the batch size.
We want to generate a sentence at a time, so we set the batch size to 1:

# Length of the vocabulary in chars
vocab_size = len(chars)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
# Batch size
BATCH_SIZE = 1

A convenience function sets up the model structure, which mirrors the training network, an embedding layer followed by a recurrent layer and a dense output layer, but without the masking or dropout used in training:

# this one is without padding masking or dropout layer
def build_gen_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.GRU(rnn_units,
                            return_sequences=True,
                            stateful=True,
                            recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model

gen_model = build_gen_model(vocab_size, embedding_dim, rnn_units, BATCH_SIZE)

Note that the embedding layer does not use masking because, in text generation, we are not passing an entire sequence but only part of a sequence that needs to be completed. Now that the model is defined, the weights for the layers can be loaded in from the checkpoint. Please remember to replace the checkpoint directory with your local directory containing the checkpoints from training:

checkpoint_dir = './training_checkpoints/'
gen_model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
gen_model.build(tf.TensorShape([1, None]))

The second main step is to generate text a character at a time. Generation needs a seed, or a starting few letters, which the model completes into a sentence. The process of generation is encapsulated in the function below:

def generate_text(model, start_string, temperature=0.7,
                  num_generate=75):
    # Low temperatures result in more predictable text.
    # Higher temperatures result in more surprising text.
    # Experiment to find the best setting.
    # Converting our start string to numbers (vectorizing)
    input_eval = [char2idx[s] for s in start_string]
    input_eval = tf.expand_dims(input_eval, 0)

    # Empty string to store our results
    text_generated = []

    # Here batch size == 1
    model.reset_states()
    for i in range(num_generate):
        predictions = model(input_eval)
        # remove the batch dimension
        predictions = tf.squeeze(predictions, 0)
        # use temperature to reshape the distribution
        predictions = predictions / temperature
        predicted_id = tf.random.categorical(
            predictions, num_samples=1)[-1, 0].numpy()
        # pass the predicted character as the next input to the model
        input_eval = tf.expand_dims([predicted_id], 0)
        text_generated.append(idx2char[predicted_id])
        # lets break if the EOS token is generated
        # if idx2char[predicted_id] == EOS:
        #     break  # end of a sentence reached, let's stop

    return (start_string + ''.join(text_generated))

The generation method takes in a seed string that is used as the starting point for the generation. This seed string is vectorized. The actual generation happens in a loop, where one character is generated at a time and appended to the sequence generated so far. At every point, the character with the highest likelihood is chosen. Choosing the next letter with the highest probability is called greedy search. However, there is a configuration parameter called temperature, which can be used to adjust the predictability of the generated text. Once scores for all characters are predicted, dividing these logits by the temperature before sampling changes the distribution of the generated characters. Smaller values of the temperature generate text that is closer to the original text. Larger values of the temperature generate more creative text. Here, a value of 0.7 is chosen to bias more toward the surprising side. To generate the text, all that is needed is one line of code:

print(generate_text(gen_model, start_string=u"Google"))

Google plans to release the Xbox One vs. Samsung Galaxy Geaote on Mother's Day

Each execution of the command may generate slightly different results. The line generated above, while obviously nonsensical, is pretty well structured. The model has learned capitalization rules and headline structure. Normally, we would not generate text beyond the end-of-sentence token, but all 75 characters are generated here for the sake of understanding the model output. Note that the output shown for text generation is indicative. You may see a different output for the same prompt.
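The effect of temperature can be sketched with plain NumPy. Dividing the logits by a small temperature sharpens the softmax distribution, making the most likely character dominate, while a large temperature flattens it. The logit values here are made up purely for illustration:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # scale logits by the temperature, then apply a softmax
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()               # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.0]          # hypothetical scores for 3 characters
for t in (0.5, 1.0, 2.0):
    # lower t concentrates mass on the top character; higher t spreads it out
    print(t, softmax_with_temperature(logits, t).round(3))
```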
There is some inherent randomness built into this process, which we can try to control by setting random seeds. When a model is retrained, it may end up at a slightly different point on the loss surface where, even though the loss numbers look similar, there may be slight differences in the model weights. Please take the outputs presented throughout this chapter as indicative rather than exact.

Here are some other examples of seed strings and model outputs, snipped after the end-of-sentence tag:

Seed: S&P
  • S&P 500 closes above 190
  • S&P: Russell Slive to again find any business manufacture
  • S&P closes above 2000 for first tim

Seed: Beyonce
  • Beyonce and Solange pose together for 'American Idol' contes
  • Beyonce's sister Solange rules' Dawn of the Planet of the Apes' report
  • Beyonce & Jay Z Get Married

Note the model's use of quotes in the first two sentences for Beyonce as the seed word. The following table shows the impact of different temperature settings for similar seed words:

Seed: S&P
  0.1: S&P 500 Closes Above 1900 For First Tim
  0.3: S&P Close to $5.7 Billion Deal to Buy Beats Electronic
  0.5: S&P 500 index slips to 7.2%, signaling a strong retail sale
  0.9: S&P, Ack Factors at Risk of what you see This Ma

Seed: Kim
  0.1: Kim Kardashian and Kanye West wedding photos release
  0.3: Kim Kardashian Shares Her Best And Worst Of His First Look At The Met Gala
  0.5: Kim Kardashian Wedding Dress Dress In The Works From Fia
  0.9: Kim Kardashian's en

Generally, the quality of the text goes down at higher values of temperature. All these examples were generated by passing the different temperature values to the generation function. A practical application of such a character-based model is completing words in a text messaging or email app. By default, the generate_text() method generates 75 characters to complete the headline.
It is easy to pass in much shorter lengths to see what the model proposes as the next few letters or words. The table below shows some experiments in completing the next 10 characters of text fragments. These completions were generated using:

print(generate_text(gen_model, start_string=u"Lets meet tom",
                    temperature=0.7, num_generate=10))

Lets meet tomorrow to t

Prompt: I need some money from ba
Completion: I need some money from bank chairma

Prompt: Swimming in the p
Completion: Swimming in the profitabili

Prompt: Can you give me a
Completion: Can you give me a Letter to

Prompt: are you fr
Completion: are you from around

Prompt: The meeting is
Completion: The meeting is back in ex

Prompt: Lets have coffee at S
Completions: Lets have coffee at Samsung hea
             Lets have coffee at Staples stor
             Lets have coffee at San Diego Z

Given that the dataset used was only news headlines, it is biased toward certain types of activities. For example, the second sentence would more naturally be completed with pool, but the model tries to fill it in with profitability. If a more general text dataset were used, this model could do quite well at generating completions for partially typed words at the end of a sentence. However, this text generation method has one limitation: the use of the greedy search algorithm.

The greedy search process is a crucial part of the text generation above. It is one of several ways to generate text. Let's take an example to understand this process. For this example, bigram frequencies analyzed and published by Peter Norvig are used. Over 743 billion English words were analyzed in this work. With 26 characters in an uncased model, there are theoretically 26 x 26 = 676 bigram combinations. However, the article reports that the following bigrams were never seen in roughly 2.8 trillion bigram instances: JQ, QG, QK, QY, QZ, WQ, and WZ. The Greedy Search with Bigrams section of the notebook has code to download and process the full dataset and show the process of greedy search.
After downloading the set of all n-grams, bigrams are extracted. A set of dictionaries is constructed to help look up the highest-probability next letter given a starting letter. Then, using some recursive code, a tree is constructed by picking the top three choices for the next letter. In the generation code above, only the top letter is chosen; here, the top three letters are kept to show how greedy search works and where it falls short.

Using the nifty anytree Python package, a nicely formatted tree can be visualized, as shown in the following figure:

Figure 5.4: Greedy search tree starting with WI

The algorithm was given the task of completing WI in a total of five characters. The tree shows cumulative probabilities for a given path. More than one path is shown so that the branches not taken by greedy search can also be seen. If a three-character word were being built, the highest-probability choice is WIN, with a probability of 0.243, followed by WIS at 0.01128. If four-letter words are considered, greedy search would consider only words that start with WIN, as that was the highest-probability path over the first three letters. WIND has the highest probability, 0.000329, along this path. However, a quick scan across all four-letter words shows that the highest-probability word is actually WITH, with a probability of 0.000399. This, in essence, is the challenge of the greedy search algorithm for text generation: sequences with higher joint probability stay hidden because the algorithm optimizes each character individually instead of the cumulative probability. Whether text is generated a character or a word at a time, greedy search suffers from the same issue. An alternative algorithm, called beam search, tracks multiple options and prunes out the lower-probability ones as generation proceeds.
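The same shortcoming can be reproduced with a toy model. The transition probabilities below are invented for illustration (they are not Norvig's actual counts); greedy search follows the locally best letter at each step and misses the path with the higher joint probability:

```python
# Hypothetical next-letter probabilities, for illustration only
probs = {
    'I': {'N': 0.6, 'T': 0.4},   # after "WI"
    'N': {'D': 0.3, 'E': 0.2},   # after "WIN"
    'T': {'H': 0.9, 'E': 0.1},   # after "WIT"
}

def greedy(start, steps):
    # pick the single most likely next letter at every step
    path, joint = start, 1.0
    for _ in range(steps):
        nxt, p = max(probs[path[-1]].items(), key=lambda kv: kv[1])
        path, joint = path + nxt, joint * p
    return path, joint

def exhaustive(start, steps):
    # enumerate every path and keep the best joint probability
    best = (start, 0.0)
    def walk(path, joint, left):
        nonlocal best
        if left == 0:
            if joint > best[1]:
                best = (path, joint)
            return
        for nxt, p in probs[path[-1]].items():
            walk(path + nxt, joint * p, left - 1)
    walk(start, 1.0, steps)
    return best

print('greedy:', greedy('I', 2))       # follows the locally best letters
print('best  :', exhaustive('I', 2))   # finds the higher joint-probability path
```

Here greedy search commits to N (0.6 beats 0.4) and ends with a joint probability of 0.6 x 0.3 = 0.18, while the globally best path goes through T for 0.4 x 0.9 = 0.36.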
The tree shown in Figure 5.4 can also be seen as an illustration of tracking beams of probabilities. To see the power of this technique, a more sophisticated model for generating text is needed. The GPT-2, or Generative Pre-Training, model published by OpenAI set many benchmarks, including in open-ended text generation. This is the subject of the second half of this chapter, where the GPT-2 model is explained first. The next topic is fine-tuning a GPT-2 model for completing email messages. Beam search and other options to improve the quality of the generated text are also shown.

Generative Pre-Training (GPT-2) model

OpenAI released the first version of the GPT model in June 2018 and followed up with GPT-2 in February 2019. The GPT-2 paper attracted much attention because full details of the large GPT-2 model were not released with it, due to concerns about nefarious uses. The large GPT-2 model was subsequently released in November 2019. The GPT-3 model is the most recent, released in May 2020. Figure 5.5 shows the number of parameters in the largest variant of each of these models:

Figure 5.5: Parameters in different GPT models

The first GPT model used the standard Transformer decoder architecture with twelve layers, each with twelve attention heads and 768-dimensional embeddings, for a total of approximately 110 million parameters, very similar to the BERT model. The largest GPT-2 model has over 1.5 billion parameters, and the largest variant of the most recently released GPT-3 model has over 175 billion parameters!

Cost of training language models

As the number of parameters and dataset sizes increase, the time taken for training also increases. As per a Lambda Labs article, if the GPT-3 model were to be trained on a single Nvidia V100 GPU, it would take 342 years. Using stock Microsoft Azure pricing, this would cost over $3 million. GPT-2 model training is estimated to cost $256 per hour.
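At that hourly rate, the arithmetic behind such estimates is simple; here a four-day run (an assumed, BERT-like duration) is costed out:

```python
rate_per_hour = 256          # quoted GPT-2 training cost in USD per hour
hours_per_day = 24
days = 4                     # assumed BERT-like training duration
cost = rate_per_hour * hours_per_day * days
print(cost)                  # 24576 -- roughly $25,000
```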
Assuming a similar running time as BERT, which is about four days, that would cost about $25,000. If the cost of training multiple models during research is factored in, the overall cost can easily increase ten-fold. At such costs, training these models from scratch is out of reach for individuals and even most companies. Transfer learning and the availability of pre-trained models from companies like Hugging Face make it possible for the general public to use these models.

The base architecture of GPT models uses the decoder part of the Transformer architecture. The decoder is a left-to-right language model. The BERT model, in contrast, is a bidirectional model. A left-to-right model is autoregressive; that is, it uses the tokens generated so far to generate the next token. Since it cannot see future tokens the way a bidirectional model can, this type of language model is ideal for text generation. Figure 5.6 shows the full Transformer architecture, with the encoder blocks on the left and decoder blocks on the right:

Figure 5.6: Full Transformer architecture with encoder and decoder blocks

The left side of Figure 5.6 should be familiar: it is essentially Figure 4.6 from the Transformer model section of the previous chapter. The encoder blocks shown are the same as in the BERT model. The decoder blocks are very similar to the encoder blocks, with a couple of notable differences.

In the encoder block, there is only one source of input, the input sequence, and all of the input tokens are available for the multi-head attention to operate on. This enables the encoder to understand the context of a token from both its left and right sides. In the decoder block, there are two inputs to each block. The outputs generated by the encoder blocks are available to all the decoder blocks and are fed into the middle of each decoder block through multi-head attention and layer normalization.

What is layer normalization?
Large deep neural networks are trained using the Stochastic Gradient Descent (SGD) optimizer or a variant like Adam. Training large models on big datasets can take a significant amount of time for the model to converge. Techniques such as weight normalization, batch normalization, and layer normalization aim to reduce training time by helping models converge faster, while also acting as regularizers. The idea behind layer normalization is to scale the inputs of a given hidden layer with the mean and standard deviation of those inputs. First, the mean and standard deviation are computed:

$$\mu^{l} = \frac{1}{H}\sum_{i=1}^{H} a_{i}^{l}, \qquad \sigma^{l} = \sqrt{\frac{1}{H}\sum_{i=1}^{H}\left(a_{i}^{l} - \mu^{l}\right)^{2}}$$

H denotes the number of hidden units in layer l. The inputs to the layer are normalized using the above-calculated values:

$$\bar{a}_{i}^{l} = \frac{g_{i}^{l}}{\sigma^{l}}\left(a_{i}^{l} - \mu^{l}\right)$$

where g is a gain parameter. Note that this formulation of the mean and standard deviation does not depend on the size of the minibatch or the dataset. Hence, this type of normalization can be used for RNNs and other sequence modeling problems.

However, the tokens generated by the decoder thus far are fed back through a masked multi-head self-attention and added to the output from the encoder blocks. Masked here refers to the fact that tokens to the right of the token being generated are masked, so the decoder cannot see them. Similar to the encoder, several such blocks are stacked on top of each other. However, the GPT architecture is only one half of the Transformer, which requires some modifications to the architecture.

The modified architecture for GPT is shown in Figure 5.7. Since there is no encoder block to feed in a representation of the input sequence, the encoder-decoder multi-head attention layer is no longer required. The outputs generated by the model are recursively fed back to generate the next token. The smallest GPT-2 model has twelve layers and 768 dimensions for each token.
The largest GPT-2 model has 48 layers and 1,600 dimensions per token. To pre-train models of this size, the authors of GPT-2 needed to create a new dataset. Web pages provide a great source of text, but that text comes with quality issues. To solve this challenge, they scraped all outbound links from Reddit posts that had received at least three karma points. The assumption made by the authors is that karma points indicate the quality of the web page being linked to. This assumption allowed scraping a huge set of text data: the resulting collection was approximately 45 million links. To extract text from the HTML of the web pages, two Python libraries were used: Dragnet and Newspaper. After some quality checks and deduplication, the final dataset was about 8 million documents with 40 GB of text. Interestingly, the authors removed any Wikipedia documents, as many test datasets are based on Wikipedia and including these pages would cause overlap between the test and training datasets. The pre-training objective is the standard LM objective of predicting the next word given the set of previous words.

Figure 5.7: GPT architecture (Source: Improving Language Understanding by Generative Pre-Training by Radford et al.)

During pre-training, the GPT-2 model is trained with a maximum sequence length of 1,024 tokens. A Byte Pair Encoding (BPE) algorithm is used for tokenization, with a vocabulary size of about 50,000 tokens. GPT-2 uses byte sequences rather than Unicode code points for the byte pair merges. If GPT-2 used only bytes for encoding, the vocabulary would contain just 256 tokens. On the other hand, using Unicode code points would yield a vocabulary of over 130,000 tokens. By cleverly using bytes in BPE, GPT-2 is able to keep the vocabulary to a manageable 50,257 tokens.
Another peculiarity of the tokenizer in GPT-2 is that it converts all text to lowercase and uses the spaCy and ftfy tokenizers prior to applying BPE. The ftfy library is quite useful for fixing Unicode issues. If these two are not available, the basic BERT tokenizer is used instead. Even though the left-to-right model may seem limiting, the inputs can be encoded in several ways to solve a variety of problems. These are shown in Figure 5.8:

Figure 5.8: Input transformations in GPT-2 for different problems (Source: Improving Language Understanding by Generative Pre-Training by Radford et al.)

The figure above shows how a pre-trained GPT-2 model can be used for a variety of tasks other than text generation. In each instance, start and end tokens are added before and after the input sequence. In all cases, a linear layer is added at the end, which is trained during model fine-tuning. The major advantage claimed is that many different types of tasks can be accomplished with the same architecture. The topmost architecture in Figure 5.8 shows how the model can be used for classification; GPT-2 could be used for IMDb sentiment analysis with this approach, for example.

The second example is textual entailment, an NLP task where the relationship between two fragments of text needs to be established. The first fragment is called the premise, and the second the hypothesis. Different relationships can exist between the premise and the hypothesis: the premise can validate or contradict the hypothesis, or the two may be unrelated. Say the premise is Exercising every day is an important part of a healthy lifestyle and longevity. If the hypothesis is exercise increases lifespan, then the premise entails, or validates, the hypothesis. Alternatively, if the hypothesis is Running has no benefits, then the premise contradicts the hypothesis.
Lastly, if the hypothesis is lifting weights can build a six-pack, then the premise neither entails nor contradicts the hypothesis. To perform entailment with GPT-2, the premise and hypothesis are concatenated with a delimiter, usually $, between them. For text similarity, two input sequences are constructed, one with the first text sequence first and the other with the second text sequence first. The outputs from the GPT model are added together and fed to the linear layer. A similar approach is used for multiple-choice questions. However, our focus in this chapter is text generation.

Generating text with GPT-2

Hugging Face's transformers library simplifies the process of generating text with GPT-2. As with the pre-trained BERT model shown in the previous chapter, Hugging Face provides pre-trained GPT and GPT-2 models. These pre-trained models are used in the rest of the chapter. The code for this and the remaining sections of this chapter can be found in the IPython notebook named text-generation-with-GPT-2.ipynb. After running the setup, scoot over to the Generating Text with GPT-2 section. A section showing text generation with GPT is also provided for reference. The first step in generating text is to download the pre-trained model and its corresponding tokenizer:

from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

gpt2tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# add the EOS token as PAD token to avoid warnings
gpt2 = TFGPT2LMHeadModel.from_pretrained("gpt2",
                            pad_token_id=gpt2tokenizer.eos_token_id)

This may take a few minutes as the models need to be downloaded. You may see a warning if spaCy and ftfy are not available in your environment. These two libraries are not mandatory for text generation.
The following code can be used to generate text using a greedy search algorithm:

# encode context the generation is conditioned on
input_ids = gpt2tokenizer.encode('Robotics is the domain of ',
                                 return_tensors='tf')

# generate text until the output length
# (which includes the context length) reaches 50
greedy_output = gpt2.generate(input_ids, max_length=50)

print("Output:\n" + 50 * '-')
print(gpt2tokenizer.decode(greedy_output[0], skip_special_tokens=True))

Output:
--------------------------------------------------
Robotics is the domain of the United States Government. The United States Government is the primary source of information on the use of drones in the United States. The United States Government is the primary source of information on the use of drones

A prompt was supplied for the model to complete. The model started in a promising manner but soon resorted to repeating the same output. Note that the output shown for text generation is indicative. You may see different outputs for the same prompt. There are a few reasons for this. There is some inherent randomness built into this process, which we can try to control by setting random seeds. The models themselves may also be retrained periodically by the Hugging Face team and may evolve with newer versions.

Issues with greedy search were noted in the previous section. Beam search is an alternative: at each step of generating a token, a set of top-probability sequences is kept as part of the beam, instead of just the single highest-probability token. The sequence with the highest overall probability is returned at the end of the generation. Figure 5.4, in the previous section on greedy search, can be considered the output of a beam search algorithm with a beam size of 3.
Generating text using beam search is trivial:

# BEAM SEARCH
# activate beam search and early_stopping
beam_output = gpt2.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    early_stopping=True
)

print("Output:\n" + 50 * '-')
print(gpt2tokenizer.decode(beam_output[0], skip_special_tokens=True))

Output:
--------------------------------------------------
Robotics is the domain of science and technology. It is the domain of science and technology. It is the domain of science and technology. It is the domain of science and technology. It is the domain of science and technology. It is the domain

Qualitatively, the first sentence makes a lot more sense than the one generated by greedy search. The early_stopping parameter signals generation to stop when all beams reach the EOS token. However, there is still much repetition going on. One way to control the repetition is to set a limit on n-grams being repeated:

# set no_repeat_ngram_size to 3
beam_output = gpt2.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    no_repeat_ngram_size=3,
    early_stopping=True
)

print("Output:\n" + 50 * '-')
print(gpt2tokenizer.decode(beam_output[0], skip_special_tokens=True))

Output:
--------------------------------------------------
Robotics is the domain of science and technology. In this article, we will look at some of the most important aspects of robotics and how they can be used to improve the lives of people around the world. We will also take a look

This makes a considerable difference in the quality of the generated text. The no_repeat_ngram_size parameter prevents the model from generating any 3-gram, or triplet of tokens, more than once. While this improves the quality of the text here, the n-gram constraint must be used with care: if the generated text is about The White House, then those three words can only be used once in the entire generated text.
In such a case, using the n-gram constraint would be counter-productive.

To beam or not to beam

Beam search works well in cases where the generated sequence is of a restricted length. As the length of the sequence increases, the number of beams to be maintained and computed increases significantly. Consequently, beam search works well in tasks like summarization and translation but performs poorly in open-ended text generation. Further, beam search, by trying to maximize the cumulative probability, generates more predictable text. The text feels less natural.

The following piece of code can be used to get a feel for the various beams being generated. Just make sure that the number of beams is greater than or equal to the number of sequences to be returned:

# Returning multiple beams
beam_outputs = gpt2.generate(
    input_ids,
    max_length=50,
    num_beams=7,
    no_repeat_ngram_size=3,
    num_return_sequences=3,
    early_stopping=True,
    temperature=0.7
)

print("Output:\n" + 50 * '-')
for i, beam_output in enumerate(beam_outputs):
    print("\n{}: {}".format(i, gpt2tokenizer.decode(beam_output,
                                       skip_special_tokens=True)))

Output:
--------------------------------------------------
0: Robotics is the domain of the U.S. Department of Homeland Security. The agency is responsible for the security of the United States and its allies, including the United Kingdom, Canada, Australia, New Zealand, and the European Union.
1: Robotics is the domain of the U.S. Department of Homeland Security. The agency is responsible for the security of the United States and its allies, including the United Kingdom, France, Germany, Italy, Japan, and the European Union.
2: Robotics is the domain of the U.S. Department of Homeland Security. The agency is responsible for the security of the United States and its allies, including the United Kingdom, Canada, Australia, New Zealand, the European Union, and the United

The text generated is very similar between the beams, differing only near the end.
Also, note that temperature is available to control the creativity of the generated text. There is another method for improving the coherence and creativity of the generated text, called Top-K sampling. This is the preferred method in GPT-2 and plays an essential role in its success in story generation. Before explaining how this works, let's try it out and see the output:

# Top-K sampling
tf.random.set_seed(42)  # for reproducible results
beam_output = gpt2.generate(
    input_ids,
    max_length=50,
    do_sample=True,
    top_k=25,
    temperature=2
)

print("Output:\n" + 50 * '-')
print(gpt2tokenizer.decode(beam_output[0], skip_special_tokens=True))

Output:
--------------------------------------------------
Robotics is the domain of people with multiple careers working with robotics systems. The purpose of Robotics & Machine Learning in Science and engineering research is not necessarily different for any given research type because the results would be much more diverse. Our team uses

The above sample was generated with a high temperature value, and a random seed was set to ensure repeatable results. The Top-K sampling method was published in a paper titled Hierarchical Neural Story Generation by Fan, Lewis, and Dauphin in 2018. The algorithm is relatively simple: at every step, it picks a token from the K highest-probability tokens. If K is set to 1, the algorithm is identical to greedy search. In the code example above, the model looks at the top 25 of the 50,000+ tokens while generating text. It picks a random token from these 25 and continues the generation. Choosing larger values of K will result in more surprising or creative text; choosing lower values will result in more predictable text. If you are a little underwhelmed by the results thus far, that is because the prompt selected is a really tough one.
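The mechanics described above can be sketched in a few lines of NumPy: keep only the K highest-scoring tokens, renormalize their probabilities, and sample from them. The logits below are toy values for five imaginary tokens:

```python
import numpy as np

def top_k_sample(logits, k, rng):
    # keep the k highest-probability tokens, renormalize, and sample
    top = np.argsort(logits)[-k:]                # indices of the top k logits
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return top[rng.choice(len(top), p=probs)]

rng = np.random.default_rng(42)
logits = np.array([2.0, 1.0, 0.5, -1.0, -3.0])   # toy scores over 5 "tokens"

# With k=1, Top-K sampling reduces to greedy search
print(top_k_sample(logits, k=1, rng=rng))        # always token 0

# With k=3, tokens 0, 1, and 2 can all be sampled
print(top_k_sample(logits, k=3, rng=rng))
```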
Consider this output generated with a Top-K of 50 for the prompt In the dark of the night, there was a:

In the dark of the night, there was a sudden appearance of light. Sighing, Xiao Chen slowly stood up and looked at Tian Cheng standing over. He took a step to look closely at Tian Cheng's left wrist and frowned.

Lin Feng was startled, and quickly took out a long sword! Lin Feng didn't understand what sort of sword that Long Fei had wielded in the Black and Crystal Palace! The Black and Crystal Palace was completely different than his original Black Stone City.

Long Fei carried a sword as a souvenir, which had been placed on the back of his father's arm by Tian Cheng.

He drew the sword from his dad's arm again! The black blade was one of the most valuable weapons within the Black and Crystal Palace. The sword was just as sharp as the sharpest of all weapons, which had been placed on Long Fei's father's arm by the Black Stone City's Black Ice, for him to

This longer-form text was generated by the smallest GPT-2 model, which has roughly 124 million parameters. Several different settings and model sizes are now available for you to play with. Remember, with great power comes great responsibility.

Between the last chapter and this one, we have covered both the encoder and decoder parts of the Transformer architecture conceptually. Now, we are ready to put both parts together in the next chapter. Let's quickly review what we covered in this chapter.

Summary

Generating text is a complicated task. There are practical uses that can make typing text messages or composing emails easier. On the other hand, there are creative uses, like generating stories. In this chapter, we covered a character-based RNN model that generates headlines one character at a time and noted that it picked up structure, capitalization, and other patterns quite well.
Even though the model was trained on a particular dataset, it showed promise in completing short sentences and partially typed words based on context. The next section covered the state-of-the-art GPT-2 model, which is based on the Transformer decoder architecture; the previous chapter had covered the Transformer encoder architecture, which is used by BERT. Generating text has many knobs to tune, like temperature to reshape sampling distributions, and greedy search, beam search, and Top-K sampling to balance the creativity and predictability of the generated text. We saw the impact of these settings on text generation and used a pre-trained GPT-2 model provided by Hugging Face to generate text.

Now that both the encoder and decoder parts of the Transformer architecture have been covered, the next chapter will use the full Transformer to build a text summarization model. Text summarization is at the cutting edge of NLP today. We will build a model that will read news articles and summarize them in a few sentences. Onward!

6. Text Summarization with Seq2seq Attention and Transformer Networks

Summarizing a piece of text challenges a deep learning model's understanding of language. Summarization can be considered a uniquely human ability, where the gist of a piece of text needs to be understood and rephrased. In the previous chapters, we built components that can help in summarization. First, we used BERT to encode text and perform sentiment analysis. Then, we used a decoder architecture, GPT-2, to generate text. Putting the encoder and decoder together yields a summarization model. In this chapter, we will implement a seq2seq Encoder-Decoder with Bahdanau Attention.
Specifically, we will cover the following topics:

• Overview of extractive and abstractive text summarization
• Building a seq2seq model with attention to summarize text
• Improving summarization with beam search
• Addressing beam search issues with length normalizations
• Measuring the performance of summarization with ROUGE metrics
• A review of state-of-the-art summarization

The first step of this journey begins with understanding the main ideas behind text summarization. It is important to understand the task before building a model.

Overview of text summarization

The core idea in summarization is to condense long-form text or articles into a short representation. The shorter representation should contain the main ideas and crucial information from the longer form. A single document can be summarized. This document could be long or may contain just a couple of sentences. An example of short document summarization is generating a headline from the first few sentences of an article. This is called sentence compression. When multiple documents are being summarized, they are usually related. They could be the financial reports of a company or news reports about an event. The generated summary could itself be long or short. A shorter summary would be desirable when generating a headline. A lengthier summary would be something like an abstract and could have multiple sentences. There are two main approaches when summarizing text:

• Extractive summarization: Phrases or sentences from the articles are selected and put together to create a summary. A mental model for this approach is using a highlighter on the long-form text, and the summary is the highlights put together. Extractive summarization is a more straightforward approach as sentences from the source text can be copied, which leads to fewer grammatical issues.
The quality of the summarization is also easier to measure using metrics such as ROUGE. This metric is detailed later in this chapter. Extractive summarization was the predominant approach before deep learning and neural networks.

• Abstractive summarization: A person may use the full vocabulary available in a language while summarizing an article. They are not restricted to only using words from the article. The mental model is that the person is penning a new piece of text. The model must have some understanding of the meaning of different words so that it can use them in the summary. Abstractive summarization is quite hard to implement and evaluate. The advent of the seq2seq architecture made significant improvements to the quality of abstractive summarization models.

This chapter focuses on abstractive summarization. Here are some examples of summaries that our model can generate:

Source text: american airlines group inc said on sunday it plans to raise ## billion by selling shares and convertible senior notes , to improve the airline's liquidity as it grapples with travel restrictions caused by the coronavirus .
Generated summary: american airlines to raise ## bln convertible bond issue

Source text: sales of newly-built single-family houses occurred at a seasonally adjusted annual rate of ## in may , that represented a #.#% increase from the downwardly revised pace of ## in april .
Generated summary: new home sales rise in may

Source text: jc penney will close another ## stores for good . the department store chain , which filed for bankruptcy last month , is inching toward its target of closing ## stores .
Generated summary: jc penney to close more stores

The source text was pre-processed to be all in lowercase, and numbers were replaced with placeholder tokens to prevent the model from inventing numbers in the summary. Some words in the generated summaries (for example, "bond" and "rise") were not present in the source text. The model was able to propose these words in the summary.
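ROUGE is detailed later in the chapter, but its core idea — counting overlapping tokens between a generated summary and a reference — can be sketched in a few lines. This is a simplified, hedged illustration (unigram counts only, with none of the stemming or longest-common-subsequence logic of full ROUGE implementations); the example strings come from the jc penney row above:

```python
from collections import Counter

def rouge_1(reference, candidate):
    """Simplified unigram ROUGE: clipped token overlap between reference and candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum((ref_counts & cand_counts).values())  # & keeps the minimum count per token
    recall = overlap / max(sum(ref_counts.values()), 1)
    precision = overlap / max(sum(cand_counts.values()), 1)
    return precision, recall

p, r = rouge_1("jc penney to close more stores",
               "jc penney will close stores")
```

Here four of the five candidate tokens appear in the reference, giving a precision of 4/5 and a recall of 4/6.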
Thus, the model is an abstractive summarization model. So, how can such a model be built? One way of looking at the summarization problem is that the model is translating an input sequence of tokens into a smaller set of output tokens. The model learns the output lengths based on the supervised examples provided. Another well-known problem is mapping an input sequence to an output sequence – the problem of Neural Machine Translation, or NMT. In NMT, the input sequence could be a sentence from the source language, and the output could be a sequence of tokens in the target language. The process for translation is as follows:

1. Convert the input text into tokens
2. Learn embeddings for these tokens
3. Pass the token embeddings through an encoder to calculate the hidden states and outputs
4. Use the hidden states with the attention mechanism for generating a context vector for the inputs
5. Pass encoder outputs, hidden states, and context vectors to the decoder part of the network
6. Generate the outputs from left to right using an autoregressive model

Google AI published a tutorial on NMT using a seq2seq attention model in July 2017. This model uses a left-to-right encoder with GRU cells. The Decoder also uses GRU cells. In summarization, the piece of text to be summarized is available up front. This may or may not be the case for machine translation. In some cases, the translation is performed on the fly; there, a left-to-right encoder is useful. However, if the entire text to be translated or summarized is available from the outset, a bidirectional Encoder can encode context from both sides of a given token. A BiRNN in the Encoder leads to much better performance of the overall model. The NMT tutorial code serves as inspiration for the seq2seq attention model and the attention tutorial referenced previously. Before we work on the model, let's look at the datasets that are used for this purpose.
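Step 4 above — turning encoder hidden states into a context vector via attention — can be sketched with NumPy. This is a hedged illustration of additive (Bahdanau) attention, not the book's TensorFlow implementation; the shapes and weight matrices below are made-up toy values:

```python
import numpy as np

def bahdanau_context(enc_outputs, dec_state, W1, W2, v):
    """Additive (Bahdanau) attention: score each encoder step against the
    decoder state, softmax the scores, and form a weighted context vector."""
    # enc_outputs: (T, H) encoder outputs; dec_state: (H,) current decoder state
    scores = np.tanh(enc_outputs @ W1 + dec_state @ W2) @ v   # (T,) alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                   # attention weights, sum to 1
    context = weights @ enc_outputs                            # (H,) context vector
    return context, weights

rng = np.random.default_rng(42)
T, H, A = 5, 8, 4   # toy sizes: 5 encoder steps, hidden size 8, attention size 4
enc = rng.normal(size=(T, H))
dec = rng.normal(size=(H,))
W1, W2, v = rng.normal(size=(H, A)), rng.normal(size=(H, A)), rng.normal(size=(A,))
ctx, attn = bahdanau_context(enc, dec, W1, W2, v)
```

Because the weights are a softmax, the context vector is a convex combination of the encoder outputs; in a trained model, W1, W2, and v are learned rather than random.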
Data loading and pre-processing

There are several summarization-related datasets available for training. These datasets are available through the TensorFlow Datasets, or tfds, package, which we have used in the previous chapters as well. The available datasets differ in length and style. The CNN/DailyMail dataset is one of the most commonly used. It was published in 2015, with approximately 1 million news articles in total. Articles from CNN, starting in 2007, and the Daily Mail, starting in 2010, were collected until 2015. The summaries are usually multi-sentence. The Newsroom dataset contains over 1.3 million news articles from 38 publications. However, this dataset requires that you register to download it, which is why it is not used in this book. The wikiHow dataset contains full wikiHow article pages and the summary sentences for those articles. The LCSTS dataset contains Chinese language data collected from Sina Weibo, with paragraphs and their one-sentence summaries. Another popular dataset is the Gigaword dataset. It provides the first one or two sentences of a news story and has the headline of the story as the summary. This dataset is quite large, with just under 4 million rows. It was published in a paper titled Annotated Gigaword by Napoles et al. in 2011. It is quite easy to import this dataset using tfds. Given the large size of the dataset and the long training times for the model, the training code is stored in Python files, while the inference code is in an IPython notebook. This pattern was used in the previous chapter as well. The code for training is in the s2s-training.py file. The top part of the file contains the imports and a method called setupGPU() to initialize the GPU. The file contains a main function, which provides the control flow, and several functions that perform specific actions. The dataset needs to be loaded first.
The code for loading the data is in the load_data() function:

def load_data():
    print(" Loading the dataset")
    (ds_train, ds_val, ds_test), ds_info = tfds.load(
        'gigaword',
        split=['train', 'validation', 'test'],
        shuffle_files=True,
        as_supervised=True,
        with_info=True,
    )
    return ds_train, ds_val, ds_test

The corresponding section in the main function looks like this:

if __name__ == "__main__":
    setupGPU()  # OPTIONAL – only if using GPU
    ds_train, _, _ = load_data()

Only the training dataset is being loaded. The validation dataset contains approximately 190,000 examples, while the test split contains over 1,900 examples. In contrast, the training set contains over 3.8 million examples. Depending on the internet connection, downloading the dataset may take a while:

Downloading and preparing dataset gigaword/1.2.0 (download: 551.61 MiB, generated: Unknown size, total: 551.61 MiB) to /xxx/tensorflow_datasets/gigaword/1.2.0...
/xxx/anaconda3/envs/tf21g/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host 'drive.google.com'. Adding certificate verification is strongly advised. See: en/latest/advanced-usage.html#ssl-warnings
Shuffling and writing examples to /xxx/tensorflow_datasets/gigaword/1.2.0.incomplete1FP5M4/gigaword-train.tfrecord
100% 1950/1951 [00:00

Multi-Modal Networks and Image Captioning with ResNets and Transformer Networks

        end_token])
    print('Predicted Caption: {}'.format(predicted_sentence))

The only thing remaining now is instantiating a ResNet50 model to extract features from image files on the fly:

rs50 = tf.keras.applications.ResNet50(
    include_top=False,
    weights="imagenet",  # no pooling
    input_shape=(224, 224, 3)
)
new_input = rs50.input
hidden_layer = rs50.layers[-1].output
features_extract = tf.keras.Model(new_input, hidden_layer)

It's the moment of truth, finally! Let's try out the model on an image.
We will load the image, pre-process it for ResNet50, and extract the features from it:

# from keras
image = load_img("./beach-surf.jpg", target_size=(224, 224))
image = img_to_array(image)
image = np.expand_dims(image, axis=0)  # batch of one
image = preprocess_input(image)  # from resnet
eval_img = features_extract.predict(image)
caption(eval_img)

The following is the example image and its caption:

Figure 7.14: Generated caption - A man is riding a surfboard on a wave

This looks like an amazing caption for the given image! However, the overall accuracy of the model is in the low 30s. There is a lot of scope for improvement in the model. The next section talks about the state-of-the-art techniques for image captioning and also proposes some simpler ideas that you can try and play around with. Note that you may see slightly different results. The reviewer for this book got the result "A man in a black shirt is riding a surfboard" while running this code. This is expected, as there are slight differences in the probabilities, and the exact place on the loss surface where the model stops training is not deterministic. We are operating in the probabilistic realm here, so there may be slight differences. You may have experienced similar differences in the text generation and summarization code in the previous chapters as well.

The following image shows some more examples of images and their captions. The notebook contains several good, as well as some atrocious, examples of the generated labels:

Figure 7.15: Examples of images and their generated captions

None of these images were in the training set. The caption quality goes down from top to bottom. Our model understands close-ups, cake, groups of people, sandy beaches, streets, and luggage, among other things. However, the bottom two examples are concerning. They hint at some bias in the model. In both of the bottom two images, the model is misinterpreting gender.
The images were deliberately chosen to show a woman in a business suit and women playing basketball. In both cases, the model proposes men in the captions. When the model was tried with a female tennis player's image, it guessed the right gender, but it changed genders in an image from a women's soccer game. Bias in models is a very important concern. In cases such as image captioning, this bias is immediately apparent. In fact, over 600,000 images were removed from the ImageNet database in 2019 after bias was found in how it classifies and tags people in its pictures. ResNet50 is pre-trained on ImageNet. However, in other models, the bias may be harder to detect. Building fair deep learning models and reducing bias in models are active areas of research in the ML community. You may have noticed that we skipped running the model on an evaluation set and on the test set. This was done for brevity, and also because those techniques were covered previously.

A quick note on metrics for evaluating the quality of captions. We saw ROUGE metrics in the previous chapters. ROUGE-L is still applicable in the case of image captioning. You can use a mental model of the caption as a summary of an image, as opposed to the summary of a paragraph in text summarization. There can be more than one way to express the summary, and ROUGE-L tries to capture the intent. There are two other commonly reported metrics:

• BLEU: This stands for Bilingual Evaluation Understudy and is the most popular metric in machine translation. We can cast the image captioning problem as a machine translation problem as well. It relies on n-grams for computing the overlap of the predicted text with a number of reference texts and combines the results into one score.

• CIDEr: This stands for Consensus-Based Image Description Evaluation and was proposed in a paper by the same name in 2015.
It tries to deal with the difficulty of automatic evaluation when multiple captions could be reasonable by combining TF-IDF and n-grams. The metric compares the captions generated by the model against multiple captions by human annotators and tries to score them based on consensus.

Before wrapping up this chapter, let's spend a little time discussing ways to improve performance and state-of-the-art models.

Improving performance and state-of-the-art models

Let's first talk through some simple experiments you can try to improve performance before talking about the latest models. Recall our discussion on positional encodings for inputs in the Encoder. Experimenting with adding or removing positional encodings will show you how much they help or hinder performance. In the previous chapter, we implemented the beam search algorithm for generating summaries. You can adapt that beam search code here and check for an improvement in the results. Another avenue of exploration is the ResNet50. We used a pre-trained network and did not fine-tune it further. It is possible to build an architecture where ResNet is part of the architecture and not a pre-processing step. Image files are loaded in, and features are extracted from ResNet50 as part of the VisualEncoder. ResNet50 layers can be trained from the get-go, or only in the last few iterations. This idea is implemented in the resnetfinetuning.py file for you to try. Another line of thinking is using a different object detection model than ResNet50, or using the output from a different layer. You can try a more complex version of ResNet, like ResNet152, or a different object detection model, like Detectron from Facebook. It should be quite easy to swap in a different model, as our code is quite modular. When you use a different model for extracting image features, the key will be to make sure tensor dimensions flow properly through the Encoder. The Decoder should not require any changes.
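As a refresher on the beam search idea mentioned above, here is a hedged, self-contained toy sketch rather than the book's implementation; the three-token vocabulary, log-probability table, and beam width are invented for illustration:

```python
# Toy next-token log-probabilities for a 3-token vocabulary {0: <eos>, 1, 2},
# keyed by the current prefix; all values here are made up for illustration.
def log_probs(prefix):
    table = {
        (): [-2.0, -0.5, -1.0],
        (1,): [-0.3, -2.0, -1.5],
        (2,): [-1.0, -0.7, -1.2],
    }
    return table.get(tuple(prefix), [-0.1, -3.0, -3.0])

def beam_search(beam_width=2, max_len=3, eos=0):
    """Keep only the beam_width highest-scoring prefixes at each step."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == eos:       # finished beams carry over unchanged
                candidates.append((seq, score))
                continue
            for tok, lp in enumerate(log_probs(seq)):
                candidates.append((seq + [tok], score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams

best_seq, best_score = beam_search()[0]
```

With beam_width=2, the search settles on the sequence [1, 0] with cumulative log-probability -0.8; on this peaked toy distribution greedy decoding finds the same answer, but on flatter distributions the two can diverge, which is where beam search pays off.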
Depending on the complexity of the model, you can either pre-process and store the image features or compute them on the fly. Recall that we just used the pixels from the image directly. This was based on a paper published recently at CVPR, titled Pixel-BERT. Most models use region proposals extracted from images instead of the pixels directly. Object detection in an image involves drawing a boundary around that object in the image. Another way to perform the same task is to classify each pixel into an object or background. These region proposals can be in the form of bounding boxes in an image. State-of-the-art models use bounding boxes or region proposals as input.

The second-biggest gain in image captioning comes from pre-training. Recall that BERT and GPT are pre-trained on specific pre-training objectives. Models differ based on whether only the Encoder is pre-trained or both the Encoder and Decoder are pre-trained. A common pre-training objective is a version of the BERT MLM task. Recall that BERT inputs are structured as [CLS] I1 I2 … In [SEP] J1 J2 … Jk [SEP], where some of the tokens from the input sequence are masked. This is adapted for image captioning, where the image features and caption tokens in the input are concatenated. Caption tokens are masked similar to how they are in the BERT model, and the pre-training objective is for the model to predict the masked tokens. After pre-training, the output of the CLS token can be used for classification or fed to the Decoder to generate the caption. Care must be taken not to pre-train on the same dataset that is used for evaluation. An example setup could use the Visual Genome and Flickr30k datasets for pre-training and COCO for fine-tuning. Image captioning is an active area of research, and research on multi-modal networks in general is just getting started.
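The MLM-style masking of caption tokens described above can be sketched as follows. This is a hedged toy version — the real BERT masking procedure also swaps some selected tokens for random tokens or leaves them unchanged — and the sentence, mask rate, and seed are made up:

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """BERT-style masking sketch: replace a fraction of tokens with [MASK]
    and record the originals as prediction targets."""
    if rng is None:
        rng = random.Random(7)  # toy fixed seed
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(MASK)
            targets[i] = tok       # position -> original token the model must predict
        else:
            masked.append(tok)
    return masked, targets

tokens = "a man is riding a surfboard on a wave".split()
masked, targets = mask_tokens(tokens, mask_prob=0.3)
```

The pre-training loss is then computed only at the masked positions, which is what forces the model to use the surrounding (here, image plus caption) context.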
Now, let's recap everything we've learned in this chapter.

Summary

In the world of deep learning, specific architectures have been developed to handle specific modalities. Convolutional Neural Networks (CNNs) have been incredibly effective in processing images and are the standard architecture for CV tasks. However, the world of research is moving toward multi-modal networks, which can take multiple types of inputs, like sounds, images, text, and so on, and perform cognition like humans. After reviewing multi-modal networks, we dived into vision-and-language tasks as a specific focus. There are a number of problems in this particular area, including image captioning, visual question answering, VCR, and text-to-image, among others. Building on our learnings from previous chapters on seq2seq architectures, custom TensorFlow layers and models, custom learning schedules, and custom training loops, we implemented a Transformer model from scratch. Transformers are state of the art at the time of writing. We took a quick look at the basic concepts of CNNs to help with the image side of things. We were able to build a model that may not be able to generate a thousand words for a picture but is definitely able to generate a human-readable caption. Its performance still needs improvement, and we discussed a number of ways to try to achieve that, including the latest techniques.

It is apparent that deep models perform very well when they are trained on a lot of data. The BERT and GPT models have shown the value of pre-training on massive amounts of data. It is still very hard to get good-quality labeled data for use in pre-training or fine-tuning. In the world of NLP, we have a lot of text data, but not enough labeled data. The next chapter focuses on weak supervision to build classification models that can label data for pre-training or even fine-tuning tasks.
8
Weakly Supervised Learning for Classification with Snorkel

Models such as BERT and GPT use massive amounts of unlabeled data along with an unsupervised training objective, such as a masked language model (MLM) for BERT or a next word prediction model for GPT, to learn the underlying structure of text. A small amount of task-specific data is used for fine-tuning the pre-trained model using transfer learning. Such models are quite large, with hundreds of millions of parameters, and require massive datasets for pre-training and lots of computation capacity for training and pre-training. Note that the critical problem being solved is the lack of adequate training data. If there were enough domain-specific training data, the gains from BERT-like pre-trained models would not be that big. In certain domains, such as medicine, the vocabulary used in task-specific data is specific to the domain. Modest increases in training data can improve the quality of the model to a large extent. However, hand labeling data is a tedious, resource-intensive, and unscalable task at the volumes required for deep learning to be successful. We discuss an alternative approach in this chapter, based on the concept of weak supervision. Using the Snorkel library, we label tens of thousands of records in a couple of hours and exceed the accuracy of the model developed in Chapter 3, Named Entity Recognition (NER) with BiLSTMs, CRFs, and Viterbi Decoding, using BERT.

This chapter covers:

• An overview of weakly supervised learning
• An overview of the differences between generative and discriminative models
• Building a baseline model with handcrafted features for labeling data
• Snorkel library basics
• Augmenting training data using Snorkel labeling functions at scale
• Training models using noisy machine-labeled data

It is essential to understand the concept of weakly supervised learning, so let's cover that first.
Weak supervision

Deep learning models have delivered incredible results in the recent past. Deep learning architectures obviated the need for feature engineering, given enough training data. On the one hand, deep learning reduced the manual effort required to handcraft features, but on the other, it significantly increased the need for labeled data for a specific task. In most domains, gathering a sizable set of high-quality labeled data is an expensive and resource-intensive task. This problem can be solved in several different ways. In previous chapters, we have seen the use of transfer learning to train a model on a large dataset before fine-tuning the model for a specific task. Figure 8.1 shows this and other approaches to acquiring labels:

Figure 8.1: Options for getting more labeled data

Hand labeling the data is a common approach. Ideally, we would have enough time and money to hire subject matter experts (SMEs) to hand label every piece of data, but this is not practical. Consider labeling a tumor detection dataset by hiring oncologists for the labeling task. Labeling data is probably way lower in priority for an oncologist than treating tumor patients. At a previous company, we organized pizza parties where we would feed people lunch in exchange for labels. In an hour, a person could label about 100 records. Feeding 10 people monthly for a year resulted in 12,000 labeled records! This scheme was useful for the ongoing maintenance of models, where we would sample the records that were out of distribution, or that the model had shallow confidence in. Thus, we adopted active learning, which determines the records that, upon labeling, would have the highest impact on the performance of a classifier. Another option is to hire labelers that are not experts but are more abundant and cheaper.
This is the approach taken by the Amazon Mechanical Turk service. There are a large number of companies that provide labeling services. Since the labelers are not experts, the same record is labeled by multiple people, and some mechanism, like a majority vote, is used to decide on the final label of the record. The charge for labeling one record by one labeler may vary from a few cents to a few dollars, depending on the complexity of the steps needed for associating a label. The output of such a process is a set of noisy labels that have high coverage, as long as your budget allows for it. We still need to figure out the quality of the labels acquired to see how these labels can be used in the eventual model. Weak supervision tries to address the problem differently. What if, using heuristics, an SME could hand label thousands of records in a fraction of the time? We will work on the IMDb movie review dataset and try to predict the sentiment of the review. We used the IMDb dataset in Chapter 4, Transfer Learning with BERT, where we explored transfer learning. It is appropriate to use the same example to show an alternate technique to transfer learning. Weak supervision techniques don't have to be used as substitutes for transfer learning. Weak supervision techniques help create larger domain-specific labeled datasets. In the absence of transfer learning, a larger labeled dataset improves model performance even with noisy labels coming from weak supervision. However, the gain in model performance will be even more significant if transfer learning and weak supervision are both used together.

An example of a simple heuristic function for labeling a review as having a positive sentiment can be shown with the following pseudocode:

if movie.review has "amazing acting" in it:
    then sentiment is positive

While this may seem like a trivial example for our use case, you will be surprised how effective it can be.
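The pseudocode above can be written as plain-Python labeling functions. Note this is a hedged, dependency-free sketch — Snorkel's actual API wraps such functions with its @labeling_function decorator — and the phrases, example reviews, and label constants here are made up for illustration:

```python
# Label constants: abstaining is a first-class outcome in weak supervision
ABSTAIN, NEG, POS = -1, 0, 1

def lf_amazing_acting(review):
    """Vote POS if a strong positive phrase appears; otherwise abstain rather than guess."""
    return POS if "amazing acting" in review.lower() else ABSTAIN

def lf_waste_of_time(review):
    """Vote NEG on a strong negative phrase; otherwise abstain."""
    return NEG if "waste of time" in review.lower() else ABSTAIN

reviews = [
    "Amazing acting and a great script.",
    "A complete waste of time.",
    "It was fine, I suppose.",
]
# One row per review, one column per labeling function
votes = [[lf(r) for lf in (lf_amazing_acting, lf_waste_of_time)] for r in reviews]
```

The ability to abstain is what separates these functions from ordinary classifiers: each one only fires where its heuristic applies, and coverage comes from combining many of them.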
In a more complicated setting, an oncologist can provide some of these heuristics and define a few of these functions, which can be called labeling functions, to label some records. These functions may conflict or overlap with each other, similar to crowdsourced labels. Another approach for getting labels is through distant supervision. An external knowledge base, like Wikipedia, can be used to label data records heuristically. In a Named-Entity Recognition (NER) use case, a gazetteer is used to match entities to a list of known entities, as discussed in Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs. In relation extraction between entities, for example, employee of or spouse of, the Wikipedia page of an entity can be mined to extract the relation, and the data record can be labeled. There are other methods of obtaining these labels, such as using thorough knowledge of the underlying distributions generating the data. For a given dataset, there can be several sources of labels. Each crowdsourced labeler is a source. Each heuristic function, like the "amazing acting" one shown above, is also a source. The core problem in weak supervision is combining these multiple sources to yield labels of sufficient quality for the final classifier. The key points of the model are described in the next section. The domain-specific model is referred to as the classifier in this chapter, as the example we are taking is the binary classification of movie review sentiment. However, the labels generated can be used for a variety of domain-specific models.

Inner workings of weak supervision with labeling functions

The idea that a few heuristic labeling functions with low coverage and less-than-perfect accuracy can help improve the accuracy of a discriminative model sounds fantastic. This section provides a high-level overview of how this works, before we see it in practice on the IMDb sentiment analysis dataset.
We assume a binary classification problem for the sake of explanation, though the scheme works for any number of labels. The set of labels for binary classification is {NEG, POS}. We have a set of unlabeled data points, X, with m samples. Note that we do not have access to the actual labels for these data points, but we represent the generated labels using Y. Let's assume we have n labeling functions, LF1 to LFn, each of which produces a label. However, we add another label for weak supervision – an abstain label. Each labeling function has the ability to choose whether it wants to apply a label or abstain from labeling. This is a vital aspect of the weak supervision approach. Hence, the set of labels produced by labeling functions is expanded to {NEG, ABSTAIN, POS}. In this setting, the objective is to train a generative model which models two things:

• The probability of a given labeling function abstaining for a given data point
• The probability of a given labeling function correctly assigning a label to a data point

By applying all the labeling functions to all the data points, we generate an m × n matrix, HL, of data points and their labels. The label generated by the heuristic LFj on the data point Xi can be represented by:

HL_{i,j} = LF_j(X_i)

The generative model tries to learn its parameters from the agreements and disagreements between the labeling functions.

Generative versus Discriminative models

If we have a set of data, X, and labels, Y, corresponding to the data, then we can say that a discriminative model tries to capture the conditional probability p(Y | X). A generative model captures the joint probability p(X, Y). Generative models, as their name implies, can generate new data points. We saw examples of generative models in Chapter 5, Generating Text with RNNs and GPT-2, where we generated news headlines. GANs (Generative Adversarial Networks) and AutoEncoders are well-known generative models.
Discriminative models label the data points in a given dataset. They do so by drawing a plane in the space of features that separates the data points into different classes. Classifiers, like the IMDb sentiment review prediction model, are typically discriminative models. As can be imagined, generative models have a much more challenging task: learning the whole underlying structure of the data.

The parameter weights, w, of the generative model P_w can be estimated by maximizing the log marginal likelihood of the observed label matrix:

w_hat = argmax_w log Σ_{Y ∈ {NEG, POS}^m} P_w(HL, Y)

Note that the log marginal likelihood of the observed labels sums out the predicted labels Y. Hence, this generative model works in an unsupervised fashion. Once the parameters of the generative model are computed, we can predict the labels for the data points as:

Y_hat_i = P_{w_hat}(Y_i | LF)

where Y_i represents labels based on the labeling functions and Y_hat_i represents the predicted label from the generative model. These predicted labels can be fed to a downstream discriminative model for classification. These concepts are implemented in the Snorkel library. The authors of the Snorkel library were the key contributors to introducing the Data Programming approach, in a paper of the same name presented at the Neural Information Processing Systems conference in 2016. The Snorkel library was introduced formally in a paper titled Snorkel: rapid training data creation with weak supervision by Ratner et al. in 2019. Apple and Google have published papers using the Snorkel approach, on Overton and Snorkel DryBell, respectively. These papers provide an in-depth discussion of the mathematics underlying the creation of training data with weak supervision. As complex as the underlying principles may be, using Snorkel for labeling data is not difficult in practice. Let us get started by preparing the dataset.
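Before fitting Snorkel's generative model, it helps to see the simplest possible way of combining the rows of the m × n label matrix: a majority vote that ignores abstains (Snorkel ships a comparable baseline as MajorityLabelVoter). This is a hedged sketch with a made-up label matrix; ties fall to whichever vote was seen first:

```python
from collections import Counter

ABSTAIN = -1

def majority_vote(row):
    """Baseline aggregation of one row of the label matrix: drop abstains
    and take the most common remaining vote."""
    votes = [v for v in row if v != ABSTAIN]
    if not votes:
        return ABSTAIN                      # every labeling function abstained
    return Counter(votes).most_common(1)[0][0]

# Toy m x n matrix: 3 data points, 3 labeling functions (0=NEG, 1=POS)
label_matrix = [
    [1, -1, 1],
    [0, 0, -1],
    [-1, -1, -1],
]
preds = [majority_vote(row) for row in label_matrix]
```

The generative model improves on this baseline by weighting each labeling function by its estimated accuracy and abstain rate, instead of treating every vote equally.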
Using weakly supervised labels to improve IMDb sentiment analysis

Sentiment analysis of movie reviews on the IMDb website is a standard task for classification-type Natural Language Processing (NLP) models. We used this data in Chapter 4 to demonstrate transfer learning with GloVe and BERT embeddings. The IMDb dataset has 25,000 training examples and 25,000 testing examples. The dataset also includes 50,000 unlabeled reviews. In previous attempts, we ignored these unsupervised data points. Adding more training data should improve the accuracy of the model. However, hand labeling would be a time-consuming and expensive exercise. We'll use Snorkel-powered labeling functions to see if the accuracy of the predictions can be improved on the testing set.

Pre-processing the IMDb dataset

Previously, we used the tensorflow_datasets package to download and manage the dataset. However, we need lower-level access to the data to enable writing the labeling functions. Hence, the first step is to download the dataset from the web. The code for this chapter is split across two files. The snorkel-labeling.ipynb file contains the code for downloading data and generating labels using Snorkel. The second file, imdb-with-snorkel-labels.ipynb, contains the code that trains models with and without the additional labeled data. If running the code, it is best to run all the code in the snorkel-labeling.ipynb file first, so that all the labeled data files are generated. The dataset is available in one compressed archive and can be downloaded and expanded as shown in snorkel-labeling.ipynb:

(tf24nlp) $ wget v1.tar.gz
(tf24nlp) $ tar xvzf aclImdb_v1.tar.gz

This expands the archive into the aclImdb directory. The training and unsupervised data is in the train/ subdirectory, while the testing data is in the test/ subdirectory. There are additional files, but they can be ignored.
Figure 8.2 below shows the directory structure:

Figure 8.2: Directory structure for the IMDb data

Reviews are stored as individual text files inside the leaf directories. Each file is named using the format [reviewID]_[rating].txt. Review identifiers are sequentially numbered from 0 to 24999 for training and testing examples. For the unsupervised data, the highest review number is 49999.

The rating is a number between 1 and 10 and has meaning only in the test and training data. This number reflects the actual rating given in a certain review. The sentiment of all reviews in the pos/ subdirectory is positive, and the sentiment of reviews in the neg/ subdirectory is negative. Ratings of 4 and below are considered negative, while ratings of 7 and above are considered positive. In this particular example, we do not use the actual rating and only consider the overall sentiment.

We load the data into pandas DataFrames for ease of processing. A convenience function is defined to load reviews from a subdirectory into a DataFrame:

    def load_reviews(path, columns=["filename", 'review']):
        assert len(columns) == 2
        l = list()
        for filename in glob.glob(path):
            # print(filename)
            with open(filename, 'r') as f:
                review = f.read()
                l.append((filename, review))
        return pd.DataFrame(l, columns=columns)

The method above loads the data into two columns – one for the name of the file and one for the text of the file. Using this method, the unsupervised dataset is loaded:

    unsup_df = load_reviews("./aclImdb/train/unsup/*.txt")
    unsup_df.describe()

            filename                           review
    count   50000                              50000
    unique  50000                              49507
    top     ./aclImdb/train/unsup/24211_0.txt  Am not from America, I usually watch this show...
    freq    1                                  5

A slightly different method is used for the training and testing datasets:

    def load_labelled_data(path, neg='/neg/', pos='/pos/', shuffle=True):
        neg_df = load_reviews(path + neg + "*.txt")
        pos_df = load_reviews(path + pos + "*.txt")
        neg_df['sentiment'] = 0
        pos_df['sentiment'] = 1
        df = pd.concat([neg_df, pos_df], axis=0)
        if shuffle:
            df = df.sample(frac=1, random_state=42)
        return df

This method returns a DataFrame with three columns – the file name, the text of the review, and a sentiment label. The sentiment label is 0 if the sentiment is negative and 1 if it is positive, as determined by the directory the review is found in. The training dataset can now be loaded like so:

    train_df = load_labelled_data("./aclImdb/train/")
    train_df.head()

            filename                          review                                             sentiment
    6868    ./aclImdb/train//neg/6326_4.txt   If you're in the mood for some dopey light ent...  0
    11516   ./aclImdb/train//pos/11177_8.txt  *****Spoilers herein***** What real...             1
    9668    ./aclImdb/train//neg/2172_2.txt   Bottom of the barrel, unimaginative, and pract...  0
    1140    ./aclImdb/train//pos/2065_7.txt   Fearful Symmetry is a pleasant episode with a ...  1
    1518    ./aclImdb/train//pos/7147_10.txt  I found the storyline in this movie to be very...  1

While we don't use the raw scores for the sentiment analysis, it is a good exercise for you to try predicting the score instead of the sentiment on your own. To help with processing the score from the raw files, the following code can be used, which extracts the scores from the file names:

    def fn_to_score(f):
        scr = f.split("/")[-1]          # get file name
        scr = scr.split(".")[0]         # remove extension
        scr = int(scr.split("_")[-1])   # the score
        return scr

    train_df['score'] = train_df.filename.apply(fn_to_score)

This adds a new score column to the DataFrame, which can be used as a starting point.
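The score extraction can be sanity-checked against filenames from the table above:

```python
def fn_to_score(f):
    # mirror of the notebook's helper: rating is the part after the
    # underscore in [reviewID]_[rating].txt
    scr = f.split("/")[-1]            # get file name
    scr = scr.split(".")[0]           # remove extension
    return int(scr.split("_")[-1])    # the score

print(fn_to_score("./aclImdb/train/pos/7147_10.txt"))  # → 10
print(fn_to_score("./aclImdb/train/neg/6326_4.txt"))   # → 4
```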
The testing data can be loaded using the same convenience function by passing a different starting data directory:

    test_df = load_labelled_data("./aclImdb/test/")

Once the reviews are loaded in, the next step is to create a tokenizer.

Learning a subword tokenizer

A subword tokenizer can be learned using the tensorflow_datasets package. Note that we want to pass all the training and unsupervised reviews while learning this tokenizer:

    text = unsup_df.review.to_list() + train_df.review.to_list()

This step creates a list of 75,000 items. If the text of the reviews is inspected, some HTML tags can be seen, as the reviews were scraped from the IMDb website. We use the Beautiful Soup package to clean out these tags:

    txt = [BeautifulSoup(x).text for x in text]

Then, we learn a vocabulary with 8,266 entries:

    encoder = tfds.features.text.SubwordTextEncoder.\
                build_from_corpus(txt, target_vocab_size=2**13)
    encoder.save_to_file("imdb")

This encoder is saved to disk. Learning the vocabulary can be a time-consuming task and needs to be done only once; saving it to disk saves effort on subsequent runs of the code. A pre-trained subword encoder is supplied. It can be found in the GitHub folder corresponding to this chapter and is titled imdb.subwords, in case you want to skip these steps.

Before we jump into a model using data labeled with Snorkel, let us define a baseline model so that we can compare the performance of the models before and after the addition of weakly supervised labels.

A BiLSTM baseline model

To understand the impact of additional labeled data on model performance, we need a point of comparison. So, we set up a BiLSTM model, which we have seen previously, as the baseline. There are a few steps of data processing, like tokenizing, vectorization, and padding/truncating the lengths of the data. Since this is code we have seen before in Chapters 3 and 4, it is replicated here for completeness with concise descriptions.
Snorkel is most effective when the unlabeled data is 10x to 50x the size of the labeled training data. IMDb provides 50,000 unlabeled examples. If all of these were labeled, the training data would only be 3x the original, which is not enough to show the value of Snorkel. Consequently, we simulate an ~18x ratio by limiting the labeled training data to only 2,000 records. The rest of the training records are treated as unlabeled data, and Snorkel is used to supply noisy labels. To prevent the leakage of labels, we split the training data and store two separate DataFrames. The code for this split can be found in the snorkel-labeling.ipynb notebook. The code fragment used to generate the split is shown below:

    from sklearn.model_selection import train_test_split

    # Randomly split training into 2k / 23k sets
    train_2k, train_23k = train_test_split(train_df, test_size=23000,
                                           random_state=42,
                                           stratify=train_df.sentiment)
    train_2k.to_pickle("train_2k.df")

A stratified split is used to ensure an equal number of positive and negative labels are sampled. A DataFrame with 2,000 records is saved; this DataFrame is used for training the baseline. Note that this may look like a contrived example, but remember that the key feature of text data is that there is a lot of it. Labels, however, are scarce, and often the main barrier is the amount of effort required to label more data. Before we see how to label large amounts of data, let's complete training the baseline model for comparison.

Tokenization and vectorizing data

We tokenize all reviews in the training set and truncate/pad them to a maximum of 150 tokens. Reviews are passed through Beautiful Soup to remove any HTML markup. All the code for this section can be found in the section titled Training Data Vectorization in the imdb-with-snorkel-labels.ipynb file.
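What the stratify argument accomplishes can be sketched without scikit-learn. The helper below is hypothetical, not the library's implementation; it only illustrates the idea of sampling each class in equal numbers:

```python
import random

def stratified_indices(labels, n_per_class, seed=42):
    # pick the same number of examples from each class
    rng = random.Random(seed)
    chosen = []
    for cls in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == cls]
        chosen += rng.sample(idx, n_per_class)
    return chosen

labels = [0, 1, 0, 1, 0, 1, 0, 1]
picked = stratified_indices(labels, n_per_class=2)
print([labels[i] for i in picked])  # → [0, 0, 1, 1]
```

scikit-learn's train_test_split with stratify additionally preserves class proportions rather than forcing exact equality, but for a balanced dataset like IMDb the effect is the same.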
Only the specific pieces of code are shown here for brevity:

    # we need a sample of 2000 reviews for training
    num_recs = 2000
    train_small = pd.read_pickle("train_2k.df")

    # we don't need the snorkel column
    train_small = train_small.drop(columns=['snorkel'])

    # remove markup
    cleaned_reviews = train_small.review.apply(lambda x: BeautifulSoup(x).text)

    # convert pandas DF into tf.Dataset
    train = tf.data.Dataset.from_tensor_slices(
        (cleaned_reviews.values, train_small.sentiment.values))

Tokenization and vectorization are done through helper functions and applied over the dataset:

    # transformation functions to be used with the dataset
    from tensorflow.keras.preprocessing import sequence

    def encode_pad_transform(sample):
        encoded = imdb_encoder.encode(sample.numpy())
        pad = sequence.pad_sequences([encoded], padding='post',
                                     maxlen=150)
        return np.array(pad[0], dtype=np.int64)

    def encode_tf_fn(sample, label):
        encoded = tf.py_function(encode_pad_transform,
                                 inp=[sample],
                                 Tout=(tf.int64))
        encoded.set_shape([None])
        label.set_shape([])
        return encoded, label

    encoded_train = train.map(encode_tf_fn,
        num_parallel_calls=tf.data.experimental.AUTOTUNE)

The test data is processed similarly:

    # remove markup
    cleaned_reviews = test_df.review.apply(
        lambda x: BeautifulSoup(x).text)

    # convert pandas DF into tf.Dataset
    test = tf.data.Dataset.from_tensor_slices((cleaned_reviews.values,
                                               test_df.sentiment.values))
    encoded_test = test.map(encode_tf_fn,
        num_parallel_calls=tf.data.experimental.AUTOTUNE)

Once the data is ready, the next step is setting up the model.

Training using a BiLSTM model

The code for creating and training the baseline is in the Baseline Model section of the notebook. A modestly sized model is created, as the focus is on showing the gains from unsupervised labeling as opposed to model complexity.
Plus, a smaller model trains faster and allows more iteration:

    # Length of the vocabulary
    vocab_size = imdb_encoder.vocab_size

    # Number of RNN units
    rnn_units = 64

    # Embedding size
    embedding_dim = 64

    # Batch size
    BATCH_SIZE = 100

The model uses a small 64-dimensional embedding and 64 RNN units. The function for creating the model is below:

    from tensorflow.keras.layers import Embedding, LSTM, \
                                        Bidirectional, Dense, Dropout

    dropout = 0.5

    def build_model_bilstm(vocab_size, embedding_dim, rnn_units,
                           batch_size, dropout=0.):
        model = tf.keras.Sequential([
            Embedding(vocab_size, embedding_dim, mask_zero=True,
                      batch_input_shape=[batch_size, None]),
            Bidirectional(LSTM(rnn_units, return_sequences=True)),
            Bidirectional(tf.keras.layers.LSTM(rnn_units)),
            Dense(rnn_units, activation='relu'),
            Dropout(dropout),
            Dense(1, activation='sigmoid')
        ])
        return model

A modest amount of dropout is added to help the model generalize better. This model has about 700K parameters.
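The roughly 700K figure can be checked by hand from the layer shapes. This is a back-of-the-envelope count, assuming the 8,266-entry vocabulary learned earlier:

```python
vocab_size, emb, rnn = 8266, 64, 64

embedding = vocab_size * emb                      # one vector per subword
# an LSTM has 4 gates: 4 * ((input_dim + units) * units + units);
# the leading 2 accounts for both directions of the Bidirectional wrapper
bilstm_1 = 2 * 4 * ((emb + rnn) * rnn + rnn)      # input is the embedding
bilstm_2 = 2 * 4 * ((2 * rnn + rnn) * rnn + rnn)  # input is 2*rnn (bidirectional)
dense_1 = (2 * rnn) * rnn + rnn                   # weights + biases
dense_2 = rnn * 1 + 1                             # final sigmoid unit

total = embedding + bilstm_1 + bilstm_2 + dense_1 + dense_2
print(total)  # → 702209
```

The per-layer values (529,024; 66,048; 98,816; 8,256; 65) match the model summary exactly, which confirms the vocabulary size of 8,266.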
    bilstm = build_model_bilstm(
        vocab_size=vocab_size,
        embedding_dim=embedding_dim,
        rnn_units=rnn_units,
        batch_size=BATCH_SIZE)
    bilstm.summary()

    Model: "sequential"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    embedding_4 (Embedding)      (100, None, 64)           529024
    _________________________________________________________________
    bidirectional_8 (Bidirection (100, None, 128)          66048
    _________________________________________________________________
    bidirectional_9 (Bidirection (100, 128)                98816
    _________________________________________________________________
    dense_6 (Dense)              (100, 64)                 8256
    _________________________________________________________________
    dropout_6 (Dropout)          (100, 64)                 0
    _________________________________________________________________
    dense_7 (Dense)              (100, 1)                  65
    =================================================================
    Total params: 702,209
    Trainable params: 702,209
    Non-trainable params: 0
    _________________________________________________________________

The model is compiled with a binary cross-entropy loss function and the Adam optimizer. Accuracy, precision, and recall metrics are tracked. The model is trained for 15 epochs, by which point it can be seen to have saturated:

    bilstm.compile(loss='binary_crossentropy',
                   optimizer='adam',
                   metrics=['accuracy', 'Precision', 'Recall'])
    encoded_train_batched = encoded_train.shuffle(num_recs, seed=42).\
                                batch(BATCH_SIZE)
    bilstm.fit(encoded_train_batched, epochs=15)

    Epoch 1/15
    20/20 [==============================] - 16s 793ms/step - loss: 0.6943
    - accuracy: 0.4795 - Precision: 0.4833 - Recall: 0.5940
    …
    Epoch 15/15
    20/20 [==============================] - 4s 206ms/step - loss: 0.0044
    - accuracy: 0.9995 - Precision: 0.9990 - Recall: 1.0000

As we can see, the model is overfitting to the small training set even after dropout regularization.
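Why the shuffle() call precedes batch() in the training pipeline above can be demonstrated with plain Python lists, independent of TensorFlow:

```python
import random

def batches(seq, size):
    # split a list into consecutive chunks of the given size
    return [seq[i:i + size] for i in range(0, len(seq), size)]

data = list(range(8))
rng = random.Random(42)

# Batch first, then shuffle: only the ORDER of the batches changes;
# each batch keeps exactly the same members in every epoch.
fixed = batches(data, 2)
rng.shuffle(fixed)
print(sorted(tuple(sorted(b)) for b in fixed))  # → [(0, 1), (2, 3), (4, 5), (6, 7)]

# Shuffle first, then batch: batch membership can change every epoch.
epoch = data[:]
rng.shuffle(epoch)
print(batches(epoch, 2))
```

No matter how the pre-made batches are shuffled, the first print always recovers the original pairs; only shuffling the elements themselves produces fresh batch compositions.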
Batch-and-Shuffle or Shuffle-and-Batch

Note the second line of code in the fragment above, which shuffles and then batches the data. Shuffling data between epochs is a form of regularization and enables the model to learn better. Shuffling before batching is a key point to remember in TensorFlow. If data is batched before shuffling, then only the order of the batches is moved around when the data is fed to the model; the composition of each batch remains the same across epochs. By shuffling before batching, we ensure each batch looks different in each epoch. You are encouraged to train with and without shuffled data. While shuffling increases training time slightly, it gives better performance on the test set.

Let us see how this model does on the test data:

    bilstm.evaluate(encoded_test.batch(BATCH_SIZE))
    250/250 [==============================] - 33s 134ms/step - loss: 2.1440
    - accuracy: 0.7591 - precision: 0.7455 - recall: 0.7866

The model has 75.9% accuracy, with precision higher than recall. Now that we have a baseline, we can see if weakly supervised labeling helps improve model performance. That is the focus of the next section.

Weakly supervised labeling with Snorkel

The IMDb dataset has 50,000 unlabeled reviews, double the size of the training set with its 25,000 labeled reviews. As explained in the previous section, we have reserved 23,000 records from the training data, in addition to the unsupervised set, for weakly supervised labeling.

Labeling records in Snorkel is performed via labeling functions. Each labeling function can return one of the possible labels or abstain from labeling. Since this is a binary classification problem, the corresponding constants are defined below, along with a sample labeling function.
All the code for this section can be found in the notebook titled snorkel-labeling.ipynb:

    POSITIVE = 1
    NEGATIVE = 0
    ABSTAIN = -1

    from snorkel.labeling.lf import labeling_function

    @labeling_function()
    def time_waste(x):
        if not isinstance(x.review, str):
            return ABSTAIN
        ex1 = "time waste"
        ex2 = "waste of time"
        if ex1 in x.review.lower() or ex2 in x.review.lower():
            return NEGATIVE
        return ABSTAIN

Labeling functions are annotated with the labeling_function() decorator provided by Snorkel. Note that the Snorkel library needs to be installed. Detailed instructions can be found on GitHub in this chapter's subdirectory. In short, Snorkel can be installed with:

    (tf24nlp) $ pip install snorkel==0.9.5

Any warnings you see can be safely ignored, as the library uses different versions of components such as TensorBoard. To be doubly sure, you can create a separate conda/virtual environment for Snorkel and its dependencies.

This chapter would not have been possible without the support of the Snorkel.ai team. Frederic Sala and Alexander Ratner from Snorkel.ai were instrumental in providing guidance and the script for hyperparameter tuning to get the most out of Snorkel.

Coming back to the labeling function above: it expects a row from a DataFrame containing a text "review" column. The function checks whether the review states that the movie or show was a waste of time. If so, it returns a negative label; otherwise, it abstains from labeling the row of data. Note that we are trying to label thousands of rows of data in a short time using these labeling functions. A practical way to write them is to print some random samples of positive and negative reviews and turn telltale words from the text into labeling functions. The central idea is to create a number of functions, each of which has good accuracy on a subset of the rows.
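Stripped of the decorator, a labeling function is ordinary Python, so its logic can be unit tested before it is handed to Snorkel. The function below mirrors time_waste() without the Snorkel wrapper:

```python
POSITIVE, NEGATIVE, ABSTAIN = 1, 0, -1

def time_waste_plain(review):
    # same logic as the decorated time_waste() above,
    # taking the review text directly instead of a DataFrame row
    if not isinstance(review, str):
        return ABSTAIN
    r = review.lower()
    if "time waste" in r or "waste of time" in r:
        return NEGATIVE
    return ABSTAIN

print(time_waste_plain("What a waste of time this was."))  # → 0
print(time_waste_plain("Loved every minute."))             # → -1
```

Testing the bare function on a handful of hand-picked reviews is a quick way to catch substring bugs before running the applier over thousands of rows.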
Let's examine some negative reviews in the training set to see what labeling functions can be created:

    neg = train_df[train_df.sentiment==0].sample(n=5, random_state=42)
    for x in neg.review.tolist():
        print(x)

One of the reviews starts off as "A very cheesy and dull road movie," which gives an idea for a labeling function:

    @labeling_function()
    def cheesy_dull(x):
        if not isinstance(x.review, str):
            return ABSTAIN
        ex1 = "cheesy"
        ex2 = "dull"
        if ex1 in x.review.lower() or ex2 in x.review.lower():
            return NEGATIVE
        return ABSTAIN

There are a number of other words that occur in negative reviews. Here is a subset of the negative labeling functions; the full list is in the notebook:

    @labeling_function()
    def garbage(x):
        if not isinstance(x.review, str):
            return ABSTAIN
        ex1 = "garbage"
        if ex1 in x.review.lower():
            return NEGATIVE
        return ABSTAIN

    @labeling_function()
    def terrible(x):
        if not isinstance(x.review, str):
            return ABSTAIN
        ex1 = "terrible"
        if ex1 in x.review.lower():
            return NEGATIVE
        return ABSTAIN

    @labeling_function()
    def unsatisfied(x):
        if not isinstance(x.review, str):
            return ABSTAIN
        ex1 = "unsatisf"  # unsatisfactory, unsatisfied
        if ex1 in x.review.lower():
            return NEGATIVE
        return ABSTAIN

All the negative labeling functions are added to a list:

    neg_lfs = [atrocious, terrible, piece_of, woefully_miscast,
               bad_acting, cheesy_dull, disappoint, crap, garbage,
               unsatisfied, ridiculous]

Examining a sample of negative reviews can give us many ideas. Typically, a small amount of effort from a domain expert can yield multiple labeling functions that can be implemented easily. If you have ever watched a movie, you are an expert as far as this dataset is concerned. Examining a sample of positive reviews results in more labeling functions.
Here is a sample of labeling functions that identify positive sentiment in reviews:

    import re

    @labeling_function()
    def classic(x):
        if not isinstance(x.review, str):
            return ABSTAIN
        ex1 = "a classic"
        if ex1 in x.review.lower():
            return POSITIVE
        return ABSTAIN

    @labeling_function()
    def great_direction(x):
        if not isinstance(x.review, str):
            return ABSTAIN
        ex1 = "(great|awesome|amazing|fantastic|excellent) direction"
        if re.search(ex1, x.review.lower()):
            return POSITIVE
        return ABSTAIN

    @labeling_function()
    def great_story(x):
        if not isinstance(x.review, str):
            return ABSTAIN
        ex1 = "(great|awesome|amazing|fantastic|excellent|dramatic) (script|story)"
        if re.search(ex1, x.review.lower()):
            return POSITIVE
        return ABSTAIN

All of the positive labeling functions can be seen in the notebook. As with the negative functions, a list of the positive labeling functions is defined:

    pos_lfs = [classic, must_watch, oscar, love, great_entertainment,
               very_entertaining, amazing, brilliant, fantastic,
               awesome, great_acting, great_direction, great_story,
               favourite]

    # set of labeling functions
    lfs = neg_lfs + pos_lfs

The development of labeling functions is an iterative process. Don't be intimidated by the number of labeling functions shown here; as you can see, they are quite simple, for the most part. To give a sense of the effort involved, I spent a total of 3 hours creating and testing labeling functions.

Note that the notebook contains a large number of simple labeling functions, of which only a subset is shown here. Please refer to the actual code for all the labeling functions.

The process involved looking at some samples and creating the labeling functions, followed by evaluating the results on a subset of the data. Checking out examples where the labeling functions disagreed with the labeled examples was very useful for making functions narrower or adding compensating functions.
So, let's see how we can evaluate these functions so we can iterate on them.

Iterating on labeling functions

Once a set of labeling functions is defined, they can be applied to a pandas DataFrame, and a model can be trained to compute the weights assigned to the various labeling functions while computing the labels. Snorkel provides functions that help with these tasks. First, let us apply the labeling functions to compute a matrix. For every row of data, this matrix has as many columns as there are labeling functions:

    # let's take a sample of 1,000 records from the training set
    lf_train = train_df.sample(n=1000, random_state=42)

    from snorkel.labeling.model import LabelModel
    from snorkel.labeling import PandasLFApplier

    # Apply the LFs to the unlabeled training data
    applier = PandasLFApplier(lfs)
    L_train = applier.apply(lf_train)

In the code above, a sample of 1,000 rows from the training data is extracted. Then, the list of all labeling functions created previously is passed to Snorkel and applied to this sample of training data. If we created 25 labeling functions, the shape of L_train would be (1000, 25). Each column represents the output of a labeling function. A generative model can now be trained on this label matrix:

    # Train the label model and compute the training labels
    label_model = LabelModel(cardinality=2, verbose=True)
    label_model.fit(L_train, n_epochs=500, log_freq=50, seed=123)
    lf_train["snorkel"] = label_model.predict(L=L_train,
                              tie_break_policy="abstain")

A LabelModel instance is created with a parameter specifying how many labels the actual model has. This model is then trained, and labels are predicted for the subset of data. These predicted labels are added as a new column to the DataFrame. Note the tie_break_policy parameter being passed into the predict() method.
In case the model has conflicting outputs from labeling functions that receive the same scores from the model, this parameter specifies how the conflict should be resolved. Here, we instruct the model to abstain from labeling the record in case of a conflict. Another possible setting is "random," where the model will randomly assign the output of one of the tied labeling functions. In the context of the problem at hand, the main difference between these two options is precision. By asking the model to abstain from labeling, we get higher-precision results, but fewer records will be labeled. Randomly choosing one of the tied functions results in higher coverage, but presumably at lower quality. This hypothesis can be tested by training the same model with the outputs of the two options separately. You are encouraged to try these options and see the results for yourself.

Since the abstain policy was chosen, not all of the 1,000 rows may have been labeled:

    pred_lfs = lf_train[lf_train.snorkel > -1]
    pred_lfs.describe()

            sentiment   score       snorkel
    count   598.000000  598.000000  598.000000

Out of 1,000 records, only 598 were labeled. Let's check how many of these were labeled incorrectly:

    pred_mistake = pred_lfs[pred_lfs.sentiment != pred_lfs.snorkel]
    pred_mistake.describe()

            sentiment   score       snorkel
    count   164.000000  164.000000  164.000000

Snorkel, armed with our labeling functions, labeled 598 records, out of which 434 labels were correct and 164 were incorrect. The label model thus has an accuracy of ~72.6%. To get inspiration for more labeling functions, you should inspect a few of the rows where the label model produced the wrong results and update or add labeling functions. As mentioned above, a total of approximately 3 hours was spent on iterating and creating labeling functions to arrive at a total of 25 functions.
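Returning to the tie_break_policy option, its behavior can be sketched in plain Python. This is a simplification of the label model's actual logic, for intuition only:

```python
import random

ABSTAIN = -1

def break_tie(probs, policy="abstain", rng=random):
    # probs: [P(negative), P(positive)] for one example
    best = max(probs)
    winners = [i for i, p in enumerate(probs) if p == best]
    if len(winners) == 1:
        return winners[0]       # no tie: emit the winning label
    if policy == "abstain":
        return ABSTAIN          # tie: refuse to label
    return rng.choice(winners)  # policy == "random": pick one at random

print(break_tie([0.2, 0.8]))  # → 1
print(break_tie([0.5, 0.5]))  # → -1
```

The abstain branch is what produced the 598-out-of-1,000 coverage above: tied examples simply drop out of the labeled set.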
To get more out of Snorkel, we need to increase the amount of training data. The objective is to develop a method that gets us many labels quickly, without a lot of manual effort. One technique that can be used in this specific case is training a simple Naïve-Bayes model to find words that are highly correlated with positive or negative labels. This is the focus of the next section. Naïve-Bayes (NB) is a basic technique covered in many introductory NLP books.

Naïve-Bayes model for finding keywords

Building an NB model on this dataset takes under an hour and has the potential to significantly increase the quality and coverage of the labeling functions. The core model code for the NB model can be found in the spam-inspired-technique-naive-bayes.ipynb notebook. Note that these explorations are an aside from the main labeling code, and this section can be skipped if desired, as the learnings from this section are applied to construct the better labeling functions outlined in the snorkel-labeling.ipynb notebook.

The main flow of the NB-based exploration is to load the reviews, remove stop words, take the top 2,000 words to construct a simple vectorization scheme, and train an NB model. Since data loading is the same as covered in previous sections, the details are skipped here.

This section uses the NLTK and wordcloud Python packages. NLTK should already be installed, as we used it in Chapter 1, Essentials of NLP. wordcloud can be installed with:

    (tf24nlp) $ pip install wordcloud==1.8

Word clouds help give an aggregate understanding of the positive and negative review text. Note that word counters are required for the top-2,000-word vectorization scheme.
A convenience function is defined that cleans HTML text, removes stop words, and tokenizes the rest into a list:

    en_stopw = set(stopwords.words("english"))

    def get_words(review, words, stopw=en_stopw):
        review = BeautifulSoup(review).text  # remove HTML tags
        review = re.sub('[^A-Za-z]', ' ', review)  # remove non-letters
        review = review.lower()
        tok_rev = wt(review)
        rev_word = [word for word in tok_rev if word not in stopw]
        words += rev_word

Then, the positive reviews are separated out and a word cloud is generated for visualization purposes:

    pos_rev = train_df[train_df.sentiment == 1]
    pos_words = []
    pos_rev.review.apply(get_words, args=(pos_words,))

    from wordcloud import WordCloud
    import matplotlib.pyplot as plt

    pos_words_sen = " ".join(pos_words)
    pos_wc = WordCloud(width = 600, height = 512).generate(pos_words_sen)
    plt.figure(figsize = (12, 8), facecolor = 'k')
    plt.imshow(pos_wc)
    plt.axis('off')
    plt.tight_layout(pad = 0)
    plt.show()

The output of the preceding code is shown in Figure 8.3:

Figure 8.3: Positive reviews word cloud

It is not surprising that movie and film are the biggest words. However, a number of other keyword suggestions can be seen here. Similarly, a word cloud for the negative reviews can be generated, as shown in Figure 8.4:

Figure 8.4: Negative reviews word cloud

These visualizations are interesting; however, a clearer picture will emerge after training the model.
Only the top 2,000 words are used for training the model:

    from collections import Counter

    pos = Counter(pos_words)
    neg = Counter(neg_words)

    # let's try to build a naive bayes model for sentiment classification
    tot_words = pos + neg
    tot_words.most_common(10)

    [('movie', 44031), ('film', 40147), ('one', 26788), ('like', 20274),
     ('good', 15140), ('time', 12724), ('even', 12646), ('would', 12436),
     ('story', 11983), ('really', 11736)]

The combined counters show the 10 most frequently appearing words across all reviews. The top 2,000 words are extracted into a list:

    top2k = [x for (x, y) in tot_words.most_common(2000)]

The vectorization of each review is fairly simple – each of the 2,000 words becomes a column for a given review. If the word represented by the column is present in the review, the value of the column is marked as 1 for that review, or 0 otherwise. So, each review is represented by a sequence of 0s and 1s indicating which of the top 2,000 words it contains. The code below shows this transformation:

    def featurize(review, topk=top2k, stopw=en_stopw):
        review = BeautifulSoup(review).text  # remove HTML tags
        review = re.sub('[^A-Za-z]', ' ', review)  # remove non-letters
        review = review.lower()
        tok_rev = wt(review)
        rev_word = [word for word in tok_rev if word not in stopw]
        features = {}
        for word in top2k:
            features['contains({})'.format(word)] = (word in rev_word)
        return features

    train = [(featurize(rev), senti)
             for (rev, senti) in zip(train_df.review, train_df.sentiment)]

Training the model is quite trivial. Note that the Bernoulli form of NB is used here, as each word is represented by its presence or absence in the review. Alternatively, the frequency of the word in the review could be used; if word frequencies are used while vectorizing, then the multinomial form of NB should be used.
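The distinction between presence-based (Bernoulli) and count-based (multinomial) features can be seen on a toy review:

```python
from collections import Counter

tokens = ["great", "movie", "great"]
vocab = ["great", "movie", "bad"]

# Bernoulli-style features: presence/absence, as used in this chapter
bernoulli = {w: (w in tokens) for w in vocab}

# Multinomial-style features: raw counts
counts = Counter(tokens)
multinomial = {w: counts[w] for w in vocab}

print(bernoulli)    # → {'great': True, 'movie': True, 'bad': False}
print(multinomial)  # → {'great': 2, 'movie': 1, 'bad': 0}
```

The Bernoulli view discards how often "great" appears; the multinomial view keeps it, which is why the two call for different likelihood models.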
NLTK also provides a way to inspect the most informative features:

    classifier = nltk.NaiveBayesClassifier.train(train)
    # 0: negative sentiment, 1: positive sentiment
    classifier.show_most_informative_features(20)

    Most Informative Features
        contains(unfunny) = True         0 : 1  =  14.1 : 1.0
        contains(waste) = True           0 : 1  =  12.7 : 1.0
        contains(pointless) = True       0 : 1  =  10.4 : 1.0
        contains(redeeming) = True       0 : 1  =  10.1 : 1.0
        contains(laughable) = True       0 : 1  =   9.3 : 1.0
        contains(worst) = True           0 : 1  =   9.0 : 1.0
        contains(awful) = True           0 : 1  =   8.4 : 1.0
        contains(poorly) = True          0 : 1  =   8.2 : 1.0
        contains(wonderfully) = True     1 : 0  =   7.6 : 1.0
        contains(sucks) = True           0 : 1  =   7.0 : 1.0
        contains(lame) = True            0 : 1  =   6.9 : 1.0
        contains(pathetic) = True        0 : 1  =   6.4 : 1.0
        contains(delightful) = True      1 : 0  =   6.0 : 1.0
        contains(wasted) = True          0 : 1  =   6.0 : 1.0
        contains(crap) = True            0 : 1  =   5.9 : 1.0
        contains(beautifully) = True     1 : 0  =   5.8 : 1.0
        contains(dreadful) = True        0 : 1  =   5.7 : 1.0
        contains(mess) = True            0 : 1  =   5.6 : 1.0
        contains(horrible) = True        0 : 1  =   5.5 : 1.0
        contains(superb) = True          1 : 0  =   5.4 : 1.0
        contains(garbage) = True         0 : 1  =   5.3 : 1.0
        contains(badly) = True           0 : 1  =   5.3 : 1.0
        contains(wooden) = True          0 : 1  =   5.2 : 1.0
        contains(touching) = True        1 : 0  =   5.1 : 1.0
        contains(terrible) = True        0 : 1  =   …

This whole exercise was done to find which words are most useful in predicting negative and positive reviews. The table above shows the words and the likelihood ratios. Taking the first row of the output, for the word unfunny, as an example: the model is saying that reviews containing unfunny are negative 14.1 times more often than they are positive. The labeling functions are updated using a number of these keywords. Upon analyzing the labels assigned by the labeling functions in snorkel-labeling.ipynb, it can be seen that more negative reviews are being labeled than positive reviews. Consequently, the labeling functions use a larger list of words for positive labels than for negative labels.
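The likelihood ratio column is just a ratio of per-class occurrence rates. The counts below are made up purely to illustrate the arithmetic, not taken from the dataset:

```python
# hypothetical document counts for one word, e.g. "unfunny"
neg_with_word, neg_total = 141, 1000  # 14.1% of negative reviews contain it
pos_with_word, pos_total = 10, 1000   # 1.0% of positive reviews contain it

# ratio of the two per-class rates: P(word | neg) / P(word | pos)
ratio = (neg_with_word / neg_total) / (pos_with_word / pos_total)
print(round(ratio, 1))  # → 14.1
```

A word with a high ratio in either direction is a strong one-sided signal, which is exactly the property a keyword labeling function needs.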
Note that imbalanced datasets cause issues with overall training accuracy, and specifically with recall. The following code fragment shows augmented labeling functions using the keywords discovered through NB above:

    # Some positive high-probability words - arbitrary cutoff of 4.5x
    '''
    contains(wonderfully) = True    1 : 0 = 7.6 : 1.0
    contains(delightful) = True     1 : 0 = 6.0 : 1.0
    contains(beautifully) = True    1 : 0 = 5.8 : 1.0
    contains(superb) = True         1 : 0 = 5.4 : 1.0
    contains(touching) = True       1 : 0 = 5.1 : 1.0
    contains(brilliantly) = True    1 : 0 = 4.7 : 1.0
    contains(friendship) = True     1 : 0 = 4.6 : 1.0
    contains(finest) = True         1 : 0 = 4.5 : 1.0
    contains(terrific) = True       1 : 0 = 4.5 : 1.0
    contains(gem) = True            1 : 0 = 4.5 : 1.0
    contains(magnificent) = True    1 : 0 = 4.5 : 1.0
    '''

    wonderfully_kw = make_keyword_lf(keywords=["wonderfully"],
                                     label=POSITIVE)
    delightful_kw = make_keyword_lf(keywords=["delightful"],
                                    label=POSITIVE)
    superb_kw = make_keyword_lf(keywords=["superb"], label=POSITIVE)
    pos_words = ["beautifully", "touching", "brilliantly", "friendship",
                 "finest", "terrific", "magnificent"]
    pos_nb_kw = make_keyword_lf(keywords=pos_words, label=POSITIVE)

    @labeling_function()
    def superlatives(x):
        if not isinstance(x.review, str):
            return ABSTAIN
        ex1 = ["best", "super", "great", "awesome", "amaz",
               "fantastic", "excellent", "favorite"]
        pos_words = ["beautifully", "touching", "brilliantly",
                     "friendship", "finest", "terrific", "magnificent",
                     "wonderfully", "delightful"]
        ex1 += pos_words
        rv = x.review.lower()
        counts = [rv.count(x) for x in ex1]
        if sum(counts) >= 3:
            return POSITIVE
        return ABSTAIN

Since keyword-based labeling functions are quite common, Snorkel provides an easy way to define such functions.
The following code fragment shows the two utility functions used to convert a list of words into a labeling function:

# Utilities for defining keyword-based labeling functions
def keyword_lookup(x, keywords, label):
    if any(word in x.review.lower() for word in keywords):
        return label
    return ABSTAIN

def make_keyword_lf(keywords, label):
    return LabelingFunction(
        name=f"keyword_{keywords[0]}",
        f=keyword_lookup,
        resources=dict(keywords=keywords, label=label),
    )

The first function does the simple matching and returns the specific label, or it abstains. Check out the snorkel-labeling.ipynb file for the full list of labeling functions that were iteratively developed. All in all, I spent approximately 12-14 hours on labeling functions and investigations. Before we try to train the model using this data, let us evaluate the accuracy of this model on the entire training data set.

Evaluating weakly supervised labels on the training set

We apply the labeling functions and train a model on the entire training dataset just to evaluate the quality of this model:

L_train_full = applier.apply(train_df)
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train_full, n_epochs=500, log_freq=50, seed=123)
metrics = label_model.score(L=L_train_full, Y=train_df.sentiment,
                            tie_break_policy="abstain",
                            metrics=["accuracy", "coverage",
                                     "precision", "recall", "f1"])
print("All Metrics: ", metrics)

Label Model Accuracy: 78.5%
All Metrics: {'accuracy': 0.7854110013835218, 'coverage': 0.83844,
'precision': 0.8564883605745418, 'recall': 0.6744344773790951,
'f1': 0.7546367008509709}

Our set of labeling functions covers 83.8% of the 25,000 training records, with 85.6% correct labels.
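The coverage number translates directly into record counts; a quick sanity check using the figures above:

```python
# Quick check: a coverage of 0.83844 over the 25,000 training records
coverage = 0.83844
n_train = 25000
covered = round(coverage * n_train)
print(covered)  # -> 20961 records received at least one label
```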
Snorkel provides the ability to analyze the performance of each labeling function:

from snorkel.labeling import LFAnalysis
LFAnalysis(L=L_train_full, lfs=lfs).lf_summary()

                       j Polarity  Coverage  Overlaps  Conflicts
atrocious              0      [0]   0.00816   0.00768    0.00328
terrible               1      [0]   0.05356   0.05356    0.02696
piece_of               2      [0]   0.00084   0.00080    0.00048
woefully_miscast       3      [0]   0.00848   0.00764    0.00504
bad_acting             4      [0]   0.08748   0.08348    0.04304
cheesy_dull            5      [0]   0.05136   0.04932    0.02760
bad                   11      [0]   0.03624   0.03624    0.01744
keyword_waste         12      [0]   0.07336   0.06848    0.03232
keyword_pointless     13      [0]   0.01956   0.01836    0.00972
keyword_redeeming     14      [0]   0.01264   0.01192    0.00556
keyword_laughable     15      [0]   0.41036   0.37368    0.20884
negatives             16      [0]   0.35300   0.34720    0.17396
classic               17      [1]   0.01684   0.01476    0.00856
must_watch            18      [1]   0.00176   0.00140    0.00060
oscar                 19      [1]   0.00064   0.00060    0.00016
love                  20      [1]   0.08660   0.07536    0.04568
great_entertainment   21      [1]   0.00488   0.00488    0.00292
very_entertaining     22      [1]   0.00544   0.00460    0.00244
amazing               23      [1]   0.05028   0.04516    0.02340
great                 31      [1]   0.27728   0.23568    0.13800
keyword_wonderfully   32      [1]   0.01248   0.01248    0.00564
keyword_delightful    33      [1]   0.01188   0.01100    0.00500
keyword_superb        34      [1]   0.02948   0.02636    0.01220
keyword_beautifully   35      [1]   0.08284   0.07428    0.03528
superlatives          36      [1]   0.14656   0.14464    0.07064
keyword_remarkable    37      [1]   0.32052   0.26004    0.14748

Note that a snipped version of the output has been presented here. The full output is available in the notebook. For each labeling function, the table presents which labels it produces (Polarity) and its coverage, that is, the fraction of records it provides a label for, along with the fraction of records where at least one other function also emits a label (Overlaps) and the fraction where another function emits a different label (Conflicts). Consider one negative and one positive labeling function as examples. The bad_acting() function covers 8.7% of the records and overlaps with other functions about 8.3% of the time.
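To make the Coverage, Overlaps, and Conflicts columns concrete, here is a small sketch on a toy label matrix. This is my own re-implementation of the per-function fractions, not Snorkel's code:

```python
import numpy as np

# Toy label matrix: 4 records x 2 labeling functions, -1 = abstain
L = np.array([[ 0, -1],
              [ 0,  0],
              [ 0,  1],
              [-1, -1]])
lf0 = L[:, 0]
others = L[:, 1:]

labeled = lf0 != -1
coverage = labeled.mean()          # LF0 labels 3 of 4 records: 0.75
overlaps = (labeled &
            (others != -1).any(axis=1)).mean()        # both label: 0.5
conflicts = (labeled &
             ((others != -1) &
              (others != lf0[:, None])).any(axis=1)).mean()  # 0.25
```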
However, it conflicts with a function producing a positive label about 4.3% of the time. The amazing() function covers about 5% of the dataset and conflicts about 2.3% of the time. This data can be used to fine-tune specific functions further and to examine how well we've separated the data. Figure 8.5 shows the balance between positive, negative, and abstain labels:

Figure 8.5: Distribution of labels generated by Snorkel

Snorkel has several options for hyperparameter tuning to improve the quality of labeling even further. We execute a grid search over the parameters to find the best training configuration, while excluding the labeling functions that add noise to the final output. Hyperparameter tuning is done by choosing different learning rates, L2 regularization values, numbers of training epochs, and optimizers. Finally, a threshold is used to determine which labeling functions should be kept for the actual labeling task:

# Grid Search
from itertools import product

lrs = [1e-1, 1e-2, 1e-3]
l2s = [0, 1e-1, 1e-2]
n_epochs = [100, 200, 500]
optimizer = ["sgd", "adam"]
thresh = [0.8, 0.9]

lma_best = 0
params_best = []
for params in product(lrs, l2s, n_epochs, optimizer, thresh):
    # do the initial pass to assess the accuracies
    label_model.fit(L_train_full, n_epochs=params[2], log_freq=50,
                    seed=123, optimizer=params[3],
                    lr=params[0], l2=params[1])
    # accuracies (weights) learned for each labeling function
    weights = label_model.get_weights()
    # LFs above our threshold
    vals = weights > params[4]
    # the LabelModel requires at least 3 LFs to train
    if sum(vals) >= 3:
        L_filtered = L_train_full[:, vals]
        label_model.fit(L_filtered, n_epochs=params[2], log_freq=50,
                        seed=123, optimizer=params[3],
                        lr=params[0], l2=params[1])
        label_model_acc = label_model.score(
            L=L_filtered, Y=train_df.sentiment,
            tie_break_policy="abstain")["accuracy"]
        if label_model_acc > lma_best:
            lma_best = label_model_acc
            params_best = params
print("best = ", lma_best, " params ", params_best)

Snorkel may print a warning that metrics are being calculated over non-abstain labels only. This is by design, as we are interested in high-confidence labels. If there is a conflict between labeling functions, then our model abstains from giving a label. The best parameters printed out are:

best = 0.8399649430324277 params (0.001, 0.1, 200, 'adam', 0.9)

Through this tuning, the accuracy of the model improved from 78.5% to 84%! Using these parameters, we label the 23,000 records held out from the training set and the 50,000 records of the unsupervised set. For the first part, we label all 25,000 training records and then split them into two sets. This particular split was referenced in the baseline model section above:

train_df["snorkel"] = label_model.predict(L=L_filtered,
                                          tie_break_policy="abstain")

from sklearn.model_selection import train_test_split
# Randomly split training into 2k / 23k sets
train_2k, train_23k = train_test_split(train_df, test_size=23000,
                                       random_state=42,
                                       stratify=train_df.sentiment)
train_23k.snorkel.hist()
train_23k.sentiment.hist()

The last two lines of code inspect the state of the generated labels, contrast them with the actual labels, and produce the graph shown in Figure 8.6:

Figure 8.6: Comparison of labels in the training set versus labels generated using Snorkel

When the Snorkel model abstains from labeling, it assigns -1 as the label. We see that the model is able to label far more negative reviews than positive ones.
We filter out the rows where Snorkel abstained from labeling and save the records:

lbl_train = train_23k[train_23k.snorkel > -1]
lbl_train = lbl_train.drop(columns=["sentiment"])
p_sup = lbl_train.rename(columns={"snorkel": "sentiment"})
p_sup.to_pickle("snorkel_train_labeled.df")

The key question we now face is: if we augment the training data with these noisy labels, which are 84% accurate, will our model perform better or worse? Note that the baseline model had an accuracy of ~74%. To answer this question, we label the unsupervised set and then train the same model architecture as the baseline.

Generating unsupervised labels for unlabeled data

As we saw in the previous section, where we labeled the training data set, it is quite simple to run the model on the unlabeled reviews of the dataset:

# Apply the LFs to the unlabeled reviews
applier = PandasLFApplier(lfs)
L_train_unsup = applier.apply(unsup_df)

label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train_unsup[:, vals], n_epochs=params_best[2],
                optimizer=params_best[3], lr=params_best[0],
                l2=params_best[1], log_freq=100, seed=42)
unsup_df["snorkel"] = label_model.predict(L=L_train_unsup[:, vals],
                                          tie_break_policy="abstain")

# rename snorkel to sentiment & keep only the labeled rows
pred_unsup_lfs = unsup_df[unsup_df.snorkel > -1]
p2 = pred_unsup_lfs.rename(columns={"snorkel": "sentiment"})
print(p2.info())
p2.to_pickle("snorkel-unsup-nbs.df")

Now the label model is trained, and predictions are added as an additional column of the unsupervised dataset. The model labels 29,583 records out of 50,000. This is almost equal to the size of the training dataset.
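Since LabelModel.predict returns -1 wherever the model abstains, the count of labeled records comes from a simple mask; a toy sketch:

```python
import numpy as np

# Toy predictions: -1 marks an abstention
preds = np.array([1, -1, 0, 0, -1, 1])
n_labeled = int((preds > -1).sum())
print(n_labeled)  # -> 4 of 6 records received a label
```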
Assuming that the error rate on the unsupervised set is similar to that observed on the training set, we just added ~24,850 records with correct labels and ~4,733 records with incorrect labels to the training set. However, the balance of this dataset is heavily tilted, as positive label coverage is still poor: there are approximately 9,000 positive labels against over 20,000 negative labels. The Increase Positive Label Coverage section of the notebook tries to further improve the coverage of the positive labels by adding more keyword functions. This results in a slightly more balanced set, as shown in the following chart:

Figure 8.7: Further improvements in labeling functions applied to the unsupervised dataset improve the positive labels

This dataset is saved to disk for use during training:

p3 = pred_unsup_lfs2.rename(columns={"snorkel2": "sentiment"})
print(p3.info())
p3.to_pickle("snorkel-unsup-nbs-v2.df")

Labeled datasets are saved to disk and reloaded in the training code for better modularity and ease of readability. In a production pipeline, intermediate outputs may not be persisted and may instead be fed directly into the training steps. Another small consideration here is the separation of virtual/conda environments for running Snorkel: having a separate script for weakly supervised labeling allows the use of a different Python environment as well.

We switch our focus back to the imdb-with-snorkel-labels.ipynb notebook, which has the models for training. The code for this part begins from the section With Snorkel Labeled Data. The newly labeled records need to be loaded from disk, cleansed, vectorized, and padded before training can be run.
We extract the labeled records and remove HTML markup, as shown below:

# labeled version of the training data split
p1 = pd.read_pickle("snorkel_train_labeled.df")
p2 = pd.read_pickle("snorkel-unsup-nbs-v2.df")
p2 = p2.drop(columns=['snorkel'])  # so that everything aligns

# now concatenate the three DFs: training plus Snorkel-labeled data
p2 = pd.concat([train_small, p1, p2])
print("showing hist of additional data")

# now balance the labels
pos = p2[p2.sentiment == 1]
neg = p2[p2.sentiment == 0]
recs = min(pos.shape[0], neg.shape[0])
pos = pos.sample(n=recs, random_state=42)
neg = neg.sample(n=recs, random_state=42)

p3 = pd.concat((pos, neg))
p3.sentiment.hist()

The original training dataset was balanced across positive and negative labels. However, there is an imbalance in the data labeled using Snorkel. We balance the dataset and discard the excess rows with negative labels. Note that the 2,000 training records used in the baseline model also need to be added, resulting in a total of 33,914 training records. As mentioned before, weak supervision really shines when the amount of data is 10x to 50x the original dataset. Here, we achieve a ratio closer to 17x, or 18x if the 2,000 training records are also included.

Figure 8.8: Distribution of records after using Snorkel and weak supervision

As shown in Figure 8.8 above, the records in blue are dropped to balance the dataset.
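The 17x and 18x multipliers follow from simple arithmetic on the record counts in the text, reading the counts so that they agree with the dataset size used for training in the next section:

```python
baseline_records = 2000     # labeled records used by the baseline model
snorkel_records = 33914     # records labeled via Snorkel, after balancing

ratio = snorkel_records / baseline_records                    # ~17x
ratio_with_baseline = ((snorkel_records + baseline_records)
                       / baseline_records)                    # ~18x
print(round(ratio), round(ratio_with_baseline))  # -> 17 18
```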
Next, the data needs to be cleansed and vectorized using the subword vocabulary:

# remove markup
cleaned_unsup_reviews = p3.review.apply(
    lambda x: BeautifulSoup(x).text)
snorkel_reviews = pd.concat((cleaned_reviews, cleaned_unsup_reviews))
snorkel_labels = pd.concat((train_small.sentiment, p3.sentiment))

Finally, we convert the pandas DataFrames into TensorFlow datasets and vectorize and pad them:

# convert pandas DF into tf.Dataset
snorkel_train = tf.data.Dataset.from_tensor_slices((
    snorkel_reviews.values, snorkel_labels.values))
encoded_snorkel_train = snorkel_train.map(
    encode_tf_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)

We are ready to train our BiLSTM model and see if the performance improves on this task.

Training BiLSTM on weakly supervised data from Snorkel

To ensure we are comparing apples to apples, we use the same BiLSTM as the baseline model. We instantiate a model with 64-dimensional embeddings, 64 RNN units, and a batch size of 100. The model uses the binary cross-entropy loss and the Adam optimizer. Accuracy, precision, and recall are tracked as the model is trained. An important step is to shuffle the dataset every epoch, which helps keep errors to a minimum. This is an important concept: deep models are trained as though the loss were a convex surface that gradient descent can ride to the bottom, but in reality the surface has many local minima and saddle points. If the model gets stuck in a local minimum during a mini-batch, it will be hard to escape when, across epochs, it receives the same data points in the same order again and again. Shuffling changes the composition and order of the mini-batches the model receives. This enables the model to learn better by getting out of these local minima faster.
The code for this section is in the imdb-with-snorkel-labels.ipynb file:

shuffle_size = snorkel_reviews.shape[0] // BATCH_SIZE * BATCH_SIZE
encoded_snorkel_batched = encoded_snorkel_train.shuffle(
    buffer_size=shuffle_size,
    seed=42).batch(BATCH_SIZE, drop_remainder=True)

Note that we buffer all the records that will be batched so that we get perfect shuffling. This comes at the cost of slightly slower training and higher memory use. Also, since our batch size is 100 and the dataset has 35,914 records, we drop the remainder of the records. We train the model for 20 epochs, a little longer than the baseline model. The baseline model was overfitting at 15 epochs, so it was not useful to train it longer. This model has a lot more data to train on and consequently needs more epochs to learn:

bilstm2.fit(encoded_snorkel_batched, epochs=20)
Train for 359 steps
Epoch 1/20
359/359 [==============================] - 92s 257ms/step
- loss: 0.4399 - accuracy: 0.7860 - Precision: 0.7900 - Recall: 0.7793
…
Epoch 20/20
359/359 [==============================] - 82s 227ms/step
- loss: 0.0339 - accuracy: 0.9886 - Precision: 0.9879 - Recall: 0.9893

The model achieves a training accuracy of 98.9%, with precision and recall quite close to each other. Evaluating the baseline model on the test data gave an accuracy score of 76.23%, which clearly proved that it was overfitting to the training data. Upon evaluating the model trained with weakly supervised labeling, the following results are obtained:

bilstm2.evaluate(encoded_test.batch(BATCH_SIZE))
250/250 [==============================] - 35s 139ms/step
- loss: 1.9134 - accuracy: 0.7658 - precision: 0.7812 - recall: 0.7386

This model, trained on weakly supervised noisy labels, achieves 76.6% accuracy, which is 0.7% higher than the baseline model. Also note that the precision went from 74.5% to 78.1%, although recall decreased.
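As an aside, the 359 steps per epoch in the training log follow directly from the batching arithmetic shown above:

```python
BATCH_SIZE = 100
n_records = 35914

# the shuffle buffer is rounded down to a whole number of batches
shuffle_size = n_records // BATCH_SIZE * BATCH_SIZE   # 35,900 records
steps_per_epoch = shuffle_size // BATCH_SIZE
print(steps_per_epoch)  # -> 359 (14 records dropped by drop_remainder)
```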
In this toy setting, we kept a lot of the variables constant, such as the model type and the dropout ratio. In a realistic setting, we could drive the accuracy even higher by optimizing the model architecture and tuning hyperparameters. There are other options to try. Recall that we instruct Snorkel to abstain from labeling if it is unsure. By changing that to a majority vote or some other policy, the amount of training data could be increased even further. You could also try training on unbalanced datasets and see the impact. The focus here was on showing the value of weak supervision for massively increasing the amount of training data, rather than on building the best model. However, you should be able to take these lessons and apply them to your projects.

It is important to take a moment and think about the causes of this result. There are a few important deep learning lessons hidden in this story. First, more labeled data is always good, given a model of sufficient complexity. There is a correlation between the amount of data and model capacity: models with higher capacities can handle more complex relationships in the data, but they also need much larger datasets to learn those complexities. However, if the model is kept constant and has sufficient capacity, the quantity of labeled data makes a huge difference, as evidenced here. There are limits to how much improvement we can achieve by increasing the scale of labeled data. In a paper titled Revisiting Unreasonable Effectiveness of Data in Deep Learning Era by Chen Sun et al., published at ICCV 2017, the authors examine the role of data in the computer vision domain. They report that the performance of models increases logarithmically with an increase in training data. The second result they report is that learning representations through pre-training helps downstream tasks quite a bit.
Techniques in this chapter can be applied to generate more data for the fine-tuning step, which can significantly boost the performance of the fine-tuned model. The second lesson is one about the basics of machine learning: shuffling the training dataset has a disproportionate impact on the performance of the model. In this book, we have not always done this, in order to manage training times. For training production models, it is important to focus on basics such as shuffling datasets before each epoch. Let's review everything we learned in this chapter.

Summary

It is apparent that deep models perform very well when they have a lot of data. BERT and GPT models have shown the value of pre-training on massive amounts of data. It is still very hard to get good-quality labeled data for use in pre-training or fine-tuning. We used the concepts of weak supervision combined with generative models to cheaply label data. With a relatively small amount of effort, we were able to multiply the amount of training data by 18x. Even though the additional training data was noisy, the BiLSTM model was able to learn effectively and beat the baseline model by 0.6%.

Representation learning or pre-training leads to transfer learning, with fine-tuned models performing well on their downstream tasks. However, in many domains, like medicine, the amount of labeled data may be small or quite expensive to acquire. Using the techniques learned in this chapter, the amount of training data can be expanded rapidly with little effort. Building a state-of-the-art-beating model helped recall some basic lessons in deep learning, such as how larger data boosts performance quite a bit, and that larger models are not always better.

Now, we turn our focus to conversational AI. Building a conversational AI system is a very challenging task with many layers. The material covered so far in the book can help in building various parts of chatbots.
The next chapter goes over the key parts of conversational AI or chatbot systems and outlines effective ways to build them.

9
Building Conversational AI Applications with Deep Learning

The art of conversation is considered a uniquely human trait. The ability of machines to have a dialog with humans has been a research topic for many years. Alan Turing proposed the now-famous Turing Test to see if a human could converse with another human and a machine through written messages and correctly identify each participant as machine or human. In recent times, digital assistants such as Alexa by Amazon and Siri by Apple have made considerable strides in conversational AI. This chapter discusses different conversational agents and puts the techniques learned in the previous chapters into context. While there are several approaches to building conversational agents, we'll focus on the more recent deep learning approaches and cover the following topics:

• Overview of conversational agents and their general architecture
• An end-to-end pipeline for building a conversational agent
• The architecture of different types of conversational agents, such as:
  • Question-answering bots
  • Slot-filling or task-oriented bots
  • General conversation bots

We'll start with an overview of the general architecture of conversational agents.

Overview of conversational agents

A conversational agent interacts with people using speech or text. Facebook Messenger would be an example of a text-based agent, while Alexa and Siri are examples of agents that interact through speech. In either case, the agent needs to understand the user's intent and respond accordingly. Hence, the core part of the agent would be a natural language understanding (NLU) module. This module would interface with a natural language generation (NLG) module to supply a response back to the user.
Voice agents differ from text-based agents in having an additional module that converts voice to text and vice versa. We can imagine the following logical structure for a voice-activated agent:

Figure 9.1: Conceptual architecture of a conversational AI system

The main difference between a speech-based system and a text-based system is how the users communicate with the system. All the other parts, to the right of the Speech Recognition and Generation section shown in Figure 9.1 above, are identical in both types of conversational AI systems. The user communicates with the agent using speech, which the agent first converts to text. Many advancements have been made in this area in the past few years, and it is generally considered a solved problem for major languages like English. English is spoken in many countries across the globe, resulting in many different pronunciations and dialects. Consequently, companies like Apple develop separate models for different accents, such as British English, Indian English, and Australian English. Figure 9.2 below shows some English and French accent options from the Siri control panel on an iPhone 11 running iOS 13.6. French, German, and some other languages also have multiple variants. Another way to handle this could be to put an accent and language classification model as the first step and then process the input through the appropriate speech recognition model:

Figure 9.2: Language variants in Siri for speech recognition

For virtual assistants, there are specific models for wake word detection. The model's objective is to start the bot once it detects a wake word or phrase such as "OK Google." The wake word triggers the bot to listen to the utterances until the conversation is completed. Once the user's speech has been converted into words, it is easy to apply the various NLP techniques that we have seen in multiple chapters of this book.
The breakdown of the elements shown inside the NLP box in Figure 9.1 can be considered conceptual. Depending on the system and the task, these components may be different models or one end-to-end model. However, it is useful to think of the logical breakdown, as shown in the figure. Understanding the user's commands and intent is a crucial part. Intent identification is essential for general-purpose systems like Amazon's Alexa or Apple's Siri, which serve multiple purposes. A specific dialog management system may be invoked based on the intent identified. The dialog manager may invoke APIs provided by a fulfillment system. In a banking bot, the command may be to get the latest balance, and the fulfillment may be a banking system that retrieves that balance. The dialog manager would process the balance and use an NLG system to convert it into a proper sentence. Note that some of these systems are built on rules-based approaches while others use end-to-end deep learning. A question-answering system is an example of an end-to-end deep learning system where dialog management and NLU are a single unit.

There are different types of conversational AI applications. The most common ones are:

• Task-oriented or slot-filling systems
• Question-answering
• Machine reading comprehension
• Social or chit-chat bots

Each of these types is described in the following sections.

Task-oriented or slot-filling systems

Task-oriented systems are purpose-built to satisfy a specific task. Some examples of tasks are ordering a pizza, getting the latest balance of a bank account, calling a person, sending a text message, and turning a light on. Most of the capabilities exposed by virtual assistants can be classified into this category.
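At its core, such a system tracks which pieces of information, or slots, it still needs from the user before it can execute the task. A minimal, framework-free sketch of this bookkeeping (the slot names and values here are illustrative, not a real framework API):

```python
# Minimal sketch of slot tracking for a task like ordering a pizza.
def missing_slots(slots):
    return [name for name, value in slots.items() if value is None]

def update_slots(slots, extracted):
    # fold newly extracted values into the known slots
    for name, value in extracted.items():
        if name in slots:
            slots[name] = value
    return slots

order = {"size": None, "crust": None, "quantity": None}
update_slots(order, {"size": "large", "crust": "thin"})
print(missing_slots(order))  # -> ['quantity'] still to be asked for
```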
Once the user's intent has been identified, control is transferred to the model managing that specific intent, which gathers all the information needed to perform the task and manages the dialog with the user. NER and POS detection models form a crucial part of such systems. Imagine that the user needs to fill a form with some information, and the bot interacts with the user to find the required information to fulfill the task. Let's take the example of ordering a pizza. The table below shows a simplified example of the choices in this process:

Size     Crust        Toppings    Delivery    Quantity
Small    Thin         Cheese      Take-out    1
Medium   Regular      Jalapeno    Delivery    2
Large    Deep dish    Pineapple               …
XL       Gluten-free  Pepperoni

Here is a made-up example of a conversation with a bot:

Figure 9.3: A possible pizza-ordering bot conversation

The bot tracks the information needed and keeps marking the information it has received from the person as the conversation progresses. Once the bot has all the information needed to complete the task, it can execute the task. Note that some steps, such as confirming the order or the customer asking for topping options, have been excluded for brevity.

In today's world, solutions like Dialogflow, part of Google Cloud, and LUIS, part of Azure, simplify building such conversational agents down to configuration. Let's see how a simple bot that implements a portion of the pizza-ordering task above can be built with Dialogflow. Note that this example has been kept small to simplify configuration and to use the free tier of Dialogflow. The first step is to navigate to the Dialogflow home page. There are two versions of Dialogflow: Essentials (ES) and CX. CX is the advanced version with many more features and controls. Essentials is a simplified version with a free tier that is perfect for a trial build of a bot.
Scroll down the page to the Dialogflow Essentials section and click on the Go to console link, as shown in Figure 9.4 below:

Figure 9.4: Dialogflow console access

Clicking on the console may require authorization of the service, and you may need to log in with your Google Cloud account. Alternatively, you may navigate to dialogflow.cloud.google.com/#/agents to see a list of configured agents. This screen is shown in Figure 9.5:

Figure 9.5: Agents configuration in Dialogflow

A new agent can be created by clicking on the blue CREATE AGENT button at the top right. If you see a different interface, please check that you are using Dialogflow Essentials. You can also use this URL to get to the agents section: https://dialogflow.cloud.google.com/#/agents. This brings up the new agent configuration screen, shown in Figure 9.6:

Figure 9.6: Creating a new agent
Click on the + sign next to Entities on the left in the previous screenshot, and you'll see the following screen: Figure 9.8: Configuring options for the crust entity in Dialogflow [ 334 ] Chapter 9 The values on the left represent the values for the crust entity, and the multiple options or synonyms on the right are the terms the user can input or speak corresponding to each choice. We will configure four options corresponding to the table above. Another entity will be created for the size of the pizza. The configured entity looks like Figure 9.9: Figure 9.9: Configuration of the size entity [ 335 ] Building Conversational AI Applications with Deep Learning Now we are ready to build the intent. Click on the + sign next to the Intents section on the left navigation bar. We will name this intent order, as this intent will get the options for crust and size from the user. First, we need to specify a set of training phrases that will trigger this intent. Some examples of such training phrases can be "I would like to order pizza" or "Can I get a pizza?". Figure 9.10 shows some of the configured training phrases for the intent: Figure 9.10: Training phrases that trigger the ordering intent There is a lot of hidden machine learning and deep learning happening in this picture, simplified by Dialogflow. For example, the platform can process text input as well as speech. These training examples are indicative, and the actual phrasing does not need to match any of these expressions directly. The next step is to define the parameters we need from the user. We add an action with two parameters – size and crust. Note that the ENTITY column links the parameter with the defined entities and their values. 
The VALUE column defines a variable name that can be used in future dialogue or for integration with APIs: [ 336 ] Chapter 9 Figure 9.11: Required parameters for the order intent For each parameter, we need to specify some prompts that the agent will use to ask the user for the information. Figure 9.12 below shows some example prompts for the size parameter. You may choose to configure your phrasings for the prompts: Figure 9.12: Prompt options for the size parameter [ 337 ] Building Conversational AI Applications with Deep Learning The last step in configuring the intent is configuring a response once the information is collected. This configuration is done in the Responses section and is shown in Figure 9.13: Figure 9.13: Response configuration for the order intent Note the use of $size.original and $crust.original in the response text. It uses the original terms used by the user while ordering when it repeats the order back. Finally, note that we set this intent as the end of the conversation as we have obtained all the information we needed to get. Our bot is ready to be trained and tested. Hit the blue Save button at the top of the page after you have configured the training phrases, action and parameters, and the responses. There is another section at the bottom called fulfilment. This allows connecting the intent with a web service to complete the intent. The bot can be tested using the right side. Note that though we configured only text, Dialogflow enables both text and voice interfaces. While we demonstrate the text interface here, you are encouraged to try the voice interface as well: [ 338 ] Chapter 9 Figure 9.14: An example of dialog showing the response processing and the variable being set Cloud-based solutions have made it quite easy to build task-oriented conversational agents for general uses. However, building an agent for a specialized domain like medical uses may require custom builds. 
Let's look at options for specific parts of such a system: • Intent identification: The simplest way to identify intent is to treat it as a classification problem. Given an utterance or input text, the model needs to classify it into several intents. Standard RNN-based architectures, like those seen in earlier chapters, can be used and adapted for this task. • Slot tagging: Tagging slots used in a sentence to correspond to inputs can be treated as a sequence classification problem. This is similar to the approach used in the second chapter, where named entities were tagged in a sequence of text. Bi-directional RNN models are quite effective in this part. [ 339 ] Building Conversational AI Applications with Deep Learning Different models can be developed for these parts, or they can be combined in one end-to-end model with a dialog manager. Dialog state tracking systems can be built by using a set of rules generated by experts or by using CRFs (see Chapter 2, Understanding Sentiment in Natural Language with BiLSTMs, for a detailed explanation). Recent approaches include a Neural Belief Tracker proposed by Mrkšić et al. in 2017 in their paper titled Neural Belief Tracker: Data-Driven Dialogue State Tracking. This system takes three inputs: 1. The last system output 2. The last user utterance 3. A slot-value pair from the possible candidates for slots These three inputs are combined through the content model and semantic decoding model and fed to a binary decision (softmax) layer to produce a final output. Deep reinforcement learning is being used to optimize the dialog policy overall. In the NLG part, the most common approach is to define a set of templates that can be dynamically populated. This approach was shown in the preceding figure Figure 9.13. Neural methods, such as semantically controlled LSTM, as proposed by Wen et al. 
in their paper Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems in 2015, are being actively researched. Now, let's move on to another interesting area of conversational agents – questionanswering and machine reading comprehension. Question-answering and MRC conversational agents Bots can be trained to answer questions based on information contained in a knowledge base (KB). This setting is called the question-answering setting. Another related area is machine reading comprehension or MRC. In MRC, questions need to be answered with respect to a set of passages or documents provided with the query. Both of these areas are seeing a lot of startup activity and innovation. A very large number of business use cases can be enabled with both of these types of conversational agents. Passing the financial report to a bot and answering questions such as the increase in revenue given the financial report would be an example of MRC. Organizations have large digital caches of information, with new information pouring in every day. Building such agents empowers knowledge workers to process and parse large amounts of information quickly. Startups like Pryon are delivering conversational AI agents that merge, ingest, and adapt a myriad of structured and unstructured data into unified knowledge domains that end users can ask natural language questions as a way to discover information. [ 340 ] Chapter 9 KBs typically consist of subject-predicate-object triples. The subject and object are entities, while the predicate indicates a relationship between them. The KB can be represented as a knowledge graph, where objects and subjects are nodes connected by predicate edges. A big challenge is the maintenance of such knowledge bases and graphs in real life. Most deep NLP approaches are focused on determining whether a given subject-predicate-object triplet is true or not. The problem is reduced to a binary classification through this reformulation. 
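The subject-predicate-object representation just described is easy to sketch directly. The facts and helper functions below are invented purely for illustration; a real KB would contain millions of triples:

```python
# A toy knowledge base as a set of (subject, predicate, object) triples.
kb = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}

def is_true(subject, predicate, obj):
    """The binary-classification reformulation: is this triple in the KB?"""
    return (subject, predicate, obj) in kb

def objects_of(subject, predicate):
    """Follow a predicate edge in the knowledge graph view of the KB."""
    return {o for s, p, o in kb if s == subject and p == predicate}

print(is_true("Paris", "capital_of", "France"))  # -> True
print(objects_of("France", "located_in"))        # -> {'Europe'}
```

Deep models replace the exact set lookup with a learned scoring function over embeddings of the entities and relation, so that plausible triples absent from the KB can still score highly.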
There are several approaches, including the use of BERT models, which can solve the classification problem. The key here is to learn an embedding of the KB and then frame queries on top of this embedding. Dat Nguyen's survey paper, titled A survey of embedding models of entities and relationships for knowledge graph completion, provides an excellent overview of various topics for a deeper dive. We focus on MRC for the rest of this section now. MRC is a challenging task as the objective is to answer any set of questions about a given set of passages or documents. These passages are not known in advance and may be of variable length. The most common research dataset used for evaluating models is the Stanford Question Answering Dataset or SQuAD, as it is commonly called. The dataset has 100,000 questions for different Wikipedia articles. The objective of the model is to output the span of text from the article that answers the question. A more challenging dataset has been published by Microsoft based on Bing queries. This dataset is called the MAchine Reading COmprehension or MARCO dataset. This dataset has over 1 million anonymized questions, with over 8.8 million passages extracted from over 3.5 million documents. Some of the questions in this dataset may not be answerable based on the passages, which is not the case with the SQuAD dataset, which makes this a challenging dataset. The second challenging aspect of MARCO as compared to SQuAD is that MARCO requires the generation of an answer by combining information from multiple passages, whereas SQuAD requires marking the span from the given passage. BERT and its variants such as ALBERT: A Lite BERT for Self-supervised Learning of Language Representations published at ICLR 2020 form the basis of most competitive baselines today. BERT architecture is well suited to this task as it allows passing in two pieces of input text separated by a [SEP] token. 
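The question-passage pairing can be sketched as simple input construction. The snippet below uses plain token strings for clarity; a real tokenizer (e.g. WordPiece) would emit vocabulary IDs:

```python
# Build a BERT-style input for question answering: the question and the
# passage form one sequence, separated by special tokens.
def build_qa_input(question, passage):
    tokens = ["[CLS]"] + question.split() + ["[SEP]"] + passage.split() + ["[SEP]"]
    # Segment ids: 0 for the question part, 1 for the passage part.
    sep = tokens.index("[SEP]")
    segments = [0] * (sep + 1) + [1] * (len(tokens) - sep - 1)
    return tokens, segments

tokens, segments = build_qa_input("who wrote it", "the author wrote it")
print(tokens[0], tokens[-1])  # -> [CLS] [SEP]
```

The segment ids let the model distinguish which tokens belong to the question and which belong to the passage, which is exactly the mechanism the span-prediction head relies on.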
The BERT paper evaluated their language model on a number of tasks, including performance on the SQuAD task. Question tokens formed the first part of the pair, and the passage/document formed the second part of the pair. The output tokens corresponding to the second part, the passage, are scored to represent whether the token represents the start of the span or the end of the span. [ 341 ] Building Conversational AI Applications with Deep Learning A high-level depiction of the architecture is shown in Figure 9.15: Figure 9.15: BERT fine-tuning approach for SQuAD question answering A multi-modal aspect of question answering is Visual QA, which was briefly introduced in Chapter 7, Multi-modal Networks and Image Captioning with ResNets and Transformer. Analogous architectures to the one proposed for image captioning, which can take images as well as text tokens, are used for solving this challenge. The setting for QA above is called single turn because the user presents a question with a passage from where the question needs to be answered. However, people have conversations with a back and forth dialog. Such a setting is called multi-turn dialog. A follow-up question may have context from a previous question or answer in the conversation. One of the challenges in a multi-turn dialog is coreference resolution. Consider the following dialog: Person: Can you tell me the balance in my account #XYZ? Bot: Your balance is $NNN. Person: Can you transfer $MM to account #ABC from that account? "that" in the second instruction refers to account #XYZ, which was mentioned in the first question from the person. This is called coreference resolution. In a multi-turn conversation, resolving references could be quite complicated based on the distance between the references. Several strides have been made in this area with respect to general conversation bots, which we'll cover next. 
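Before moving on, note that the span selection described above reduces to picking the best-scoring (start, end) pair over the passage tokens. A minimal sketch with invented scores (a real model would produce these logits from BERT's per-token outputs):

```python
# Pick the best answer span given per-token start/end scores.
# The scores here are made up; BERT would compute them from token outputs.
def best_span(start_scores, end_scores, max_len=5):
    best, best_score = None, float("-inf")
    for i, s in enumerate(start_scores):
        # Only consider spans that start at i and are at most max_len long.
        for j in range(i, min(i + max_len, len(end_scores))):
            score = s + end_scores[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best

start = [0.1, 2.0, 0.3, 0.2]
end   = [0.0, 0.1, 1.5, 0.4]
print(best_span(start, end))  # -> (1, 2)
```

The constraint that the end index cannot precede the start index, plus a maximum span length, is what keeps the search over pairs tractable.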
[ 342 ] Chapter 9 General conversational agents Seq2seq models provide the best inspiration for learning multi-turn general conversations. A useful mental model is that of machine translation. Similar to the machine translation problem, the response to the previous question can be thought of as a translation of that input into a different language – the response. Encoding more context into a conversation can be achieved by passing in a sliding window of the previous conversation turns instead of just the last question/statement. The term open-domain is often used to describe bots in this area as the domain of the conversation is not fixed. The bot should be able to discuss a wide variety of topics. There are several issues that are their own research topics. Lack of personality or blandness is one such problem. The dialog is very dry. As an example, we have seen the use of a temperature hyperparameter to adjust the predictability of the response in previous chapters. Conversational agents have a high propensity to generate "I don't know" responses due to a lack of specificity in the dialog. A variety of techniques, including GANs, can be used to address this. The Personalizing Dialogue Agents paper authored by Zhang et al. from Facebook outlines some of the approaches used to address this problem. Two recent examples that highlight the state of the art of writing human-like comments come from Google and Facebook. Google published a paper titled Towards a Human-like Open-Domain Chatbot, with a chatbot named Meena with over 2.6 billion parameters. The core model is a seq2seq model using an Evolved Transformer (ET) block for encoding and decoding. The model architecture has one ET block in the encoder and 13 ET block in the decoder. ET block was discovered through neural architecture search (NAS) on top of the Transformer architecture. A new human evaluation metric called Sensibleness and Specificity Average (SSA) was proposed in the paper. 
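The sliding-window context encoding described at the start of this section can be sketched in a few lines. The turns and separator token below are illustrative; actual chatbots use their tokenizer's own special tokens:

```python
# Build the model input from the last `window` turns of the conversation,
# joined with a separator token, instead of only the latest utterance.
def build_context(turns, window=3, sep=" <SEP> "):
    return sep.join(turns[-window:])

turns = [
    "Hi there!",
    "Hello! How can I help?",
    "Tell me a joke.",
    "Why did the chicken cross the road?",
]
print(build_context(turns, window=2))
# -> Tell me a joke. <SEP> Why did the chicken cross the road?
```

Increasing the window gives the model more conversational context at the cost of longer input sequences, which is the practical trade-off in multi-turn models.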
The current literature has a variety of different metrics being proposed for the evaluation of such open-domain chatbots, with little standardization. Another example of an open-domain chatbot is described by Facebook at https://ai.facebook.com/blog/state-of-the-art-open-source-chatbot/. This work builds on several years of research and combines the work on personalization, empathy, and KBs into a blended model called BlenderBot. Similar to Google's research, different datasets and benchmarks are used to train this chatbot. The code for the bot has been shared on. ParlAI, by Facebook Research, provides several models for chatbots through facebookresearch/ParlAI.

This is a very active area of research. Comprehensive coverage of this topic would take a book of its own. Hopefully, you have learned many techniques in this book that can be combined to build amazing conversational agents. Let's wrap up.

Summary

We discussed the various types of conversational agents, such as task-oriented, question-answering, machine reading comprehension, and general chit-chat bots. Building a conversational AI system is a very challenging task with many layers, and it is an area of active research and development. The material covered earlier in the book can also help in building various parts of chatbots.

Epilogue

First, let me congratulate you on reaching the end of the book. I hope this book helped you get a grounding in advanced NLP models. The main challenge facing a book such as this is that it will likely be obsolete by the time it reaches the press. The key thing is that new developments are based on past developments; for example, the Evolved Transformer is based on the Transformer architecture.
Knowledge of all the models presented in the book will give you a solid foundation and significantly cut down the amount of time you need to spend to understand a new development. A set of influential and important papers for each chapter have also been made available in the GitHub repository. I am excited to see what you will discover and build next! [ 344 ] 10 Installation and Setup Instructions for Code Instructions for setting up an environment for the code in the book are provided in this chapter. These instructions: • Have been tested on macOS 10.15 and Ubuntu 18.04.3 LTS. You may have to translate these instructions for Windows. • Only cover the CPU version of TensorFlow. For the latest GPU installation instructions, please follow. Please note that the use of a GPU is highly recommended. It will cut down the training times of complex models from days to hours. The installation uses Anaconda and pip. It is assumed that Anaconda is set up and ready to go on your machine. Note that we use some new and some uncommon packages. These packages may not be available through conda. We will use pip in such cases. Notes: • On macOS: conda 49.2, pip 20.3.1 • On Ubuntu: conda 4.6.11, pip 20.0.2 [ 345 ] Installation and Setup Instructions for Code GitHub location The code for this book is located in the following public GitHub repository: Please clone this repository to access all the code for the book. Please note that seminal papers for each of the chapters are included in the GitHub repository inside each chapter's directory. Now, the common steps to set up the conda environment are explained below: • Step 1: Create a new conda environment with Python 3.7.5: $ conda create -n tf24nlp python==3.7.5 The environment is named tf24nlp but feel free to use your own name and make sure you use that in the following steps. I like to prefix my environment names with the version of TensorFlow being used and I suffix a "g" if that environment has a GPU version of the library. 
As you can probably infer, we are going to use TensorFlow 2.4. • Step 2: Activate the environment and install the following packages: $ conda activate tf24nlp (tf24nlp) $ conda install pandas==1.0.1 numpy==1.18.1 This installs the NumPy and pandas libraries in our newly created environment. • Step 3: Install TensorFlow 2.4. To do this, we will need to use pip. As of the time of writing, the conda distribution of TensorFlow was still at 2.0. TensorFlow has been moving quite fast. In general, conda distributions are a little behind the latest versions available: (tf24nlp) $ pip install tensorflow==2.4 Please note that these instructions are for the CPU version of TensorFlow. For GPU installation instructions, please refer to install/gpu. [ 346 ] Chapter 10 • Step 4: Install Jupyter Notebook – feel free to install the latest version: (tf24nlp) $ conda install Jupyter The rest of the installation instructions are about specific libraries used in specific chapters. If you have trouble installing through Jupyter Notebook, you can install them from the command line. Specific instructions for each of the chapters are given as follows. Chapter 1 installation instructions No specific instructions are required for this chapter, as the code for this chapter is run on Google Colab, at colab.research.google.com. Chapter 2 installation instructions The tfds package needs to be installed: (tf24nlp) $ pip install tensorflow_datasets==3.2.1 We use tfds in most of the chapters going forward. Chapter 3 installation instructions 1. Install matplotlib via the following command: (tf24nlp) $ conda install matplotlib==3.1.3 A newer version may work as well. 2. Install the TensorFlow Addons package for Viterbi decoding: (tf24nlp) $ pip install tensorflow_addons==0.11.2 Note that this package is not available through conda. 
[ 347 ] Installation and Setup Instructions for Code Chapter 4 installation instructions This chapter requires the installation of sklearn: (tf24nlp) $ conda install scikit-learn==0.23.1 Hugging Face's Transformers library needs to be installed as well: (tf24nlp) $ pip install transformers==3.0.2 Chapter 5 installation instructions None required. Chapter 6 installation instructions A library that will be used to compute ROUGE scores needs to be installed: (tf24nlp) $ pip install rouge_score Chapter 7 installation instructions We require the Pillow library for processing images. This library is the friendly version of the Python Imaging Library. It can be installed like so: (tf24nlp) conda install pillow==7.2.0 TQDM is a nice utility to display progress bars while executing long loops: (tf24nlp) $ conda install tqdm==4.47.0 Chapter 8 installation instructions Snorkel needs to be installed. At the time of writing, the version of Snorkel installed was 0.9.5. Note that this version of Snorkel uses older versions of pandas and TensorBoard. You should be able to safely ignore any warnings about mismatched versions for the purposes of the code in this book. However, if you continue to face conflicts in your environment, then I suggest creating a separate Snorkel-specific conda environment. [ 348 ] Chapter 10 Run the labeling functions in that environment and store the outputs as a separate CSV file. TensorFlow training can be run by switching back to the tf24nlp environment and loading the labeled data in: (tf24nlp) $ pip install snorkel==0.9.5 We'll also use BeautifulSoup for parsing HTML tags out of the text: (tf24nlp) $ conda install beautifulsoup4==4.9 There is an optional section in the chapter that involves plotting word clouds. This requires the following package to be installed: (tf24nlp) $ pip install wordcloud==1.8 Note that this chapter also uses NLTK, which we installed in the first chapter. Chapter 9 installation instructions None. 
Share your experience Thank you for taking the time to read this book. If you enjoyed this book, help others to find it. Leave a review at. [ 349 ] Other Books You May Enjoy If you enjoyed this book, you may be interested in these other books by Packt: Deep Learning with TensorFlow 2 and Keras - Second Edition Antonio Gulli Amita Kapoor Sujit Pal ISBN: 978-1-83882-341-2 ● Build machine learning and deep learning systems with TensorFlow 2 and the Keras API ● Use Regression analysis, the most popular approach to machine learning ● Understand ConvNets (convolutional neural networks) and how they are essential for deep learning systems such as image classifiers [ 351 ] Other Books You May Enjoy ● Use GANs (generative adversarial networks) to create new data that fits with existing patterns ● Discover RNNs (recurrent neural networks) that can process sequences of input intelligently, using one part of a sequence to correctly interpret another ● Apply deep learning to natural human language and interpret natural language texts to produce an appropriate response ● Train your models on the cloud and put TF to work in real environments ● Explore how Google tools can automate simple ML workflows without the need for complex modeling [ 352 ] Other Books You May Enjoy Transformers for Natural Language Processing Denis Rothman ISBN: 978-1-80056-579-1 ● Use the latest pre-trained transformer models ● Grasp the workings of the original Transformer, GPT-2, BERT, T5, and other transformer models ● Create language understanding Python programs using concepts that outperform classical deep learning models ● Use a variety of NLP platforms, including Hugging Face, Trax, and AllenNLP ● Apply Python, TensorFlow, and Keras programs to sentiment analysis, text summarization, speech recognition, machine translations, and more ● Measure productivity of key transformers to define their scope, potential, and limits, in production [ 353 ] Index A abstractive summaries examples 186, 187 Adaptive 
Moment Estimation (Adam Optimizer) 119 Attention mechanism 123 Audio-Visual Speech Recognition (AVSR) 228 data, vectorizing 296, 297 training, on weakly supervised data from Snorkel 322-324 used, for training 297-300 BiLSTM model 65-69 building 83-86 bottleneck design 244 Byte Pair Encoding (BPE) 26, 117, 132 B C Bahdanau Attention architecture 126 Bahdanau attention layer 197-199 Batch Normalization (BatchNorm) 245 beam search 171, 180 used, for decoding penalties 218-220 used, for improving text summarization 214-217 BERT-based transfer learning 123 attention model 125, 127 encoder-decoder networks 123, 124 transformer model 128, 130 BERT fine-tuning approach for SQuAD question answering 341, 342 bidirectional encoder representations from transformers (BERT) model 131-133 custom layers, building 142-147 normalization 133-139 sequences 135 tokenization 133-139 Bi-directional Long Short-Term Memory (BiLSTM) 25, 47 Bilingual Evaluation Understudy (BLEU) 221, 280 BiLSTM baseline model 295 data tokenization 296, 297 captions generating 274-280 cloud-based solutions, for building task-x task-oriented conversational agents intent identification 339 slot tagging 339 Common Objects in Context (COCO) URL 235 conda environment setting up 346 Conditional Random Fields (CRFs) working 87, 89 Consensus-Based Image Description Evaluation (CIDEr) 280 constructor parameters 195, 196 context-free vectorization 36 Continuous Bag-of-Words 41 Continuous Skip-gram 41 conversational agents overview 328-338 conversational AI applications 330 conversation, with bot example 331 [ 355 ] Convolutional Neural Networks (CNNs) convolutions 240, 241 image processing with 239 key properties 239 pooling 241 regularization, with dropout 242, 243 residual connections 243-245 ResNets 243-245 count-based vectorization 34 modeling after 35, 36 custom CRF Layer implementing 91, 92 custom CRF model implementing 93, 94 training, with loss function 94, 95 custom training implementing 95-99 encoding 58 F 
feature extraction model 110, 116-120 creating 121, 122 forward pass 101 G data loading 75-79 modeling, with Parts-of-Speech (POS) tagging 30, 31 modeling, with stop words removed 24-26 normalizing 80-83 vectorizing 80-83 data locality 232 Decoder model 199-202 training 202-207 Dialogflow agents configuration 333 console access 332 URL 332 domain adaptation 107 domains 107 dropout layer 102 Gap Sentence Generation (GSG) 224 gated recurrent units (GRUs) 49, 51 gazetteer 73 URL 73 General Attention 125 general conversational agents 343, 344 generating text model training 155-159 generative adversarial networks (GANs) 289 Generative Pre-Training (GPT-2) model 171-177 using, for text generation 177-183 Global Vectors for Word Representation (GloVe) 110 GloVe embeddings 111 used, for creating pre-trained embedding matrix 115, 116 used, for performing IMDb sentiment analysis 110 Google Colab GPUs, enabling on 7, 8 gradient clipping 49 greedy search used, for improving text summarization 210-214 using, for text generation 164-171 Groningen Meaning Bank (GMB) dataset 74 E H embeddings 40 encoder 56 encoder-decoder network 123 Encoder model 194-197 training 202-207 Hidden Markov Model (HMM)-based models 25 high-level image captioning model building, steps 234 human-computer interaction (HCI) 46 D [ 356 ] I image captioning 232-234 MS-COCO dataset, using for 235-238 performance, improving 281, 282 image feature extraction performing, with ResNet50 245-249 image processing with CNNs 239 with ResNet50 239 IMDb sentiment analysis improving, with weakly supervised labels 290 performing, with GloVe embeddings 110 IMDb training data loading 112-114 inner workings, of weak supervision with labeling functions 288-290 In-Other-Begin (IOB) 77 Inverse Document Frequency 37 K knowledge base (KB) 340 L labeled data collecting 3 development environment setup, for collection of 4-6 labeling functions 288 iterating on 304-306 language models (LM) 128 training cost 172 layer normalization 
174 learning rate annealing 159 learning rate decay 159 implementing, as custom callback 159-164 learning rate warmup 160 lemma 32 lemmatization 31-33 longest common subsequence (LCS) 222 Long-Short Term Memory (LSTM) 49 cell value 50 forget gate 50 input gate 50 output gate 50 Long Short-Term Memory (LSTM) networks 50, 51 LSTM model with embeddings 62-65 M Machine Learning (ML) project 2 MAchine Reading COmprehension database (MARCO) 341 masked language model (MLM) objective 131, 224 Max pooling 241 Metric for Evaluation of Translation with Explicit Ordering (METEOR) 221 morphology 32 MRC conversational agents 340, 341 MS-COCO dataset used, for image captioning 235-238 Multi-Head Attention block 130 multi-modal deep learning 228 language tasks 229-231 vision 229-231 multi-task learning 108, 109 N Naïve-Bayes (NB) 306 Naïve-Bayes (NB) model used, for finding keywords 306-313 Named Entity Recognition (NER) 72-74 GMB dataset 74, 75 using, with BiLSTM 89 using, with CRFs 89, 90 natural language generation (NLG) 340 Natural Language Processing (NLP) 229 natural language understanding (NLU) 46 natural language understanding (NLU) module 328 NER datasets URL 73 News Aggregator dataset 151 normalization 55 [ 357 ] P S padding 58-60 Parts-of-Speech (POS) tagging 26-30 data, modeling with 30, 31 penalties coverage normalization 218-220 decoding, with beam search 218 length normalization 218 performance optimization with tf.data 61 Porter stemmer 31 prebuilt BERT classification model 139-141 pre-process IMDb dataset 291-294 pre-trained embedding matrix creating, with GloVe embeddings 115, 116 pre-trained GloVe embeddings loading 114, 115 pre-training 106 segmentation 13, 15 in Japanese 13-18 self-attention 126 sentence compression 186 sentiment classification, with LSTMs 51, 52 data, loading 52-55 seq2seq model 123 building, with attention layer 193 seq2seq model, with attention layer Bahdanau attention layer 197-202 building 193, 194 Decoder model 199 Encoder layer 194-197 
sequential learning 109, 110 Skip-gram Negative Sampling (SGNS) 41 Snorkel used, for weakly supervised labelling 300-304 sparse representations 38 Stanford Question Answering Dataset (SQuAD) 3, 341 state-of-the-art approach 224, 225 state-of-the-art models 281, 282 stemming 31-33 Stochastic Gradient Descent (SGD) 174 stop word removal 20-25 stride length 240 subject matter experts (SMEs) 287 subword tokenization 132 subword tokenizer 294, 295 summaries generating 207-210 Q question-answering setting 340 R ragged tensors 59 re3d URL 73 Recall-Oriented Understudy for Gisting Evaluation (ROUGE) 221 Recurrent Neural Networks (RNNs) 7 building blocks 48, 49 representation learning 40 ResNet50 image feature extraction, performing with 245-249 image processing with 239 root morpheme 32 ROUGE metric evaluating 221-224 ROUGE-L 222 ROUGE-N 221 ROUGE-S 222 ROUGE-W 222 T tasks 107 teacher forcing process 200 temperature 167 Term Frequency - Inverse Document Frequency (TF-IDF) 37, 38 Term Frequency (TF) 37 text generation character-based approach 150 data loading 151, 152 [ 358 ] data normalization 152-154 data pre-processing 151, 152 data tokenization 152-154 GPT-2 model, using 177-183 improving, with greedy search 164-171 text normalization 8-10 normalized data, modeling 11-13 stop word removal 20-24 tokenization 13 text processing workflow 2 data collection 2, 3 data labeling 2, 3 stages 2 text normalization 8 text vectorization 33 text summaries data loading 188, 189 data pre-processing 188, 189 data tokenization 190-193 data vectorization 190-193 evaluating 221 generating 207 overview 186, 188 text summaries, approaches abstractive summarization 186 extractive summarization 186 text vectorization 33, 34 count-based vectorization 34 Term Frequency - Inverse Document Frequency (TF-IDF) 37 word vectors 40 TF-IDF features used, for modeling 39 tokenization 55, 58 tokenized data modeling 19, 20 tokenizer 56 Top-K sampling using, for text generation 181-183 transfer learning 
considerations 106 overview 106 types 107 transfer learning, types domain adaptation 107, 108 multi-task learning 108, 109 sequential learning 109, 110 Transformer architecture 123 Transformer model 125, 249-251 creating 263, 264 Decoder 260-263 masks 251-253 multi-head attention 253-256 positional encoding 251-253 scaled dot-product 253-256 training, with VisualEncoder 264 VisualEncoder 257-260 Transformer model, training with VisualEncoder 264 checkpoints 270, 271 custom learning rate schedule 268, 269 custom training 272-274 instantiating 267, 268 loss function 270 masks 270, 271 metrics 270 training data, loading 265, 267 U Universal POS (UPOS) tags 26 unsupervised labels generating, unlabeled data 319-322 V vectorization 55 Visual Commonsense Reasoning (VCR) 230 URL 230 visual grounding 231 Visual Question Answering (VQA) 229 URL 229 VisualEncoder Transformer model, training with 264 Viterbi decoder 99 Viterbi decoding 99, 100 first word label probability 101-103 [ 359 ] W weakly supervised data, from Snorkel BiLSTM baseline model, training on 322-324 weakly supervised labelling with Snorkel 300-304 weakly supervised labels evaluating, on training data set 314-318 using, to improve IMDb sentiment analysis 290 weak supervision 286-288 Windows Subsystem for Linux (WSL) 152 Word2Vec embeddings using, with pretrained models 42, 43 WordPiece tokenization 132 word vectors 40, 41 [ 360 ]
Hello,

Thanks for your request. I managed to reproduce the problem on my side. Your request has been linked to the appropriate issue. You will be notified as soon as it is fixed.

Best regards,

Hi, Andrey. I'm very interested in resolving this issue. Could you please help me? I can send you additional materials if necessary. Our tech support request is taking a long time…

Hi Ilya,

Thanks for your request. This issue will be fixed and included in the next hotfix, which will be released within 4-5 weeks. You will be notified. Also, could you please attach your document here for testing?

Best regards,

Hello Alexander and Ilya.

I have inspected the source document and figured out the cause of the problem. This might give you a feasible workaround. The list is a so-called legacy list coming from Microsoft Word 6. List items are indented incorrectly because Aspose.Words does not process legacy lists perfectly, so you can recreate the list using the ordinary list formatting technique. Newer versions of Microsoft Word never create legacy lists. Please ask me if there are any difficulties with document refactoring.

Regards,

I have looked at this too. The lists are "legacy" lists. There is documentation that explains how to process such lists, but it does not appear to be correct, because MS Word does not render the list according to that description. I cannot figure out from the data in the list how to render it correctly. I am postponing this issue. We will not be working on it until we see more customer reports or until we have figured out a suitable solution.

The issues you have found earlier (filed as 14758) have been fixed in this update.
MPI_Init_thread
Initialize the MPI execution environment

int MPI_Init_thread( int *argc, char ***argv, int required, int *provided );
int MPI_Init_thread( int *argc, wchar_t ***argv, int required, int *provided );

Parameters

- argc - [in] Pointer to the number of arguments
- argv - [in] Pointer to the argument vector
- required - [in] Level of desired thread support
- provided - [out] Level of provided thread support

Command line arguments

MPI specifies no command-line arguments but does allow an MPI implementation to make use of them. See MPI_INIT for a description of the command line arguments supported by MPI_INIT and MPI_INIT_THREAD.

Remarks

Advice to users: In C and C++, the passing of argc and argv is optional. In C, this is accomplished by passing the appropriate null pointer. In C++, this is accomplished with two separate bindings to cover these two cases.

The thread support levels are monotonic; i.e., MPI_THREAD_SINGLE < MPI_THREAD_FUNNELED < MPI_THREAD_SERIALIZED < MPI_THREAD_MULTIPLE.

Notes for Fortran

Note that the Fortran binding for this routine does not have the argc and argv arguments: MPI_INIT_THREAD(required, provided, ierror).

See Also

MPI_Init, MPI_Finalize

Example Code

The following sample code illustrates MPI_Init_thread.

#include "mpi.h"
#include <stdio.h>

int main( int argc, char *argv[] )
{
    int errs = 0;
    int provided, flag, claimed;

    MPI_Init_thread( 0, 0, MPI_THREAD_MULTIPLE, &provided );

    MPI_Is_thread_main( &flag );
    if (!flag) {
        errs++;
        printf( "This thread called init_thread but Is_thread_main gave false\n" );
        fflush(stdout);
    }

    MPI_Query_thread( &claimed );
    if (claimed != provided) {
        errs++;
        printf( "Query thread gave thread level %d but Init_thread gave %d\n", claimed, provided );
        fflush(stdout);
    }

    MPI_Finalize();
    return errs;
}
http://mpi.deino.net/mpi_functions/mpi_init_thread.html
Lesson 2 - First object-oriented app in Swift

In the previous lesson, Introduction to object-oriented programming in Swift, we introduced the object-oriented paradigm. In this lesson of the Swift basic constructs course, we'll create what must always be created when you encounter a new paradigm in programming: a "Hello World" program. A Hello object world program, to be precise!

We'll start by creating a new console Swift application (Command Line Tool) in Xcode as we're used to. In Project navigator on the left, right-click on our project folder and choose "New File...". In the dialog, choose "Swift File" and continue by clicking on "Next". We'll name the new file Greeter.swift. We can see a new source file in Project navigator now. In the comment block you can see, for example, the date of creation. We can also see import Foundation for importing the Swift basic functionality.

We have to declare the class by ourselves, but it's easy:

class Greeter {

}

It's a good practice to keep each class in its own .swift file. In our case, it may seem useless, but in more complex applications it'll prove to be more than worthwhile.

In Swift, classes aren't gathered in packages or namespaces, as in some other programming languages. Instead, the elementary unit is a Swift file, and it's not necessary to have all the code written within a class. For example, it's a common practice to create a Constants.swift file containing just String constants declared using let to have them all in one place to avoid typo errors.

The second "unit" in Swift is the module. Apple defines a module as a "single unit of distribution". A module is simply something that contains the intended functionality and can be used in various programs. Our new project is actually a module itself, but it doesn't make sense to use it in other applications. Similarly, Foundation is a module, and so is UIKit, which is imported as the base of iOS applications instead of Foundation (because it contains Foundation).

Next, we'll add a greet() method to the Greeter class, which will be publicly visible and won't return a value or take any parameters.
In Swift, we declare methods as follows:

[access modifier] func [methodName]([parameters]) -> [return type]

We can omit the access modifier before the method; Swift uses internal as default, meaning the method can be accessed only from within the module (which is the whole application in our case). Next, we write the method's name. We name methods in a similar fashion as variables, using camelCase, with the very first letter lowercase. Parentheses with parameters are required; we'll leave them empty since the method won't have any parameters. In the method body, we'll write code that prints a message to the console. Our class will now look like this:

class Greeter {

    func greet() {
        print("Hello object world!")
    }

}

We're finished here for now, let's move to main.swift. In the main file, we'll create an instance of the Greeter class. It'll be the greeter object which we'll work with further. We store objects in variables and use the class name as the data type. An instance typically has the same name as its class, only with the very first letter lowercase. Let's declare the variable and then create a new instance of the Greeter class:

let greeter : Greeter
greeter = Greeter()

The first line says: "I want a greeter variable which will later contain a Greeter class instance." We've worked with variables like this before. On the first line, we only make the declaration, so let can be used. After the first assignment, it will no longer be possible to assign a new instance to the greeter variable. On the second line, we create a new instance of the Greeter class using parentheses. Swift created an implicit parameterless constructor for us, so creating an instance of an object is similar to calling a method. Of course, the entire code can be shortened to:

let greeter = Greeter()

Since we now have a Greeter class instance in a variable, we can let it greet the user. We'll call the greet() method as greeter.greet().
The main.swift file will now look like this:

import Foundation

let greeter : Greeter
greeter = Greeter()
greeter.greet()

Let's run the program.

Hello object world!

We've successfully made our first object-oriented app! Now let's add a name parameter to our greet() method, so it can greet the user properly:

func greet(name: String) {
    print("Hi \(name)!")
}

We can see the syntax of a method parameter is the same as the syntax of a variable. If we wanted more parameters, we'd separate them with commas. Let's modify our main.swift file now:

let greeter : Greeter
greeter = Greeter()
greeter.greet(name: "Carl")
greeter.greet(name: "Peter")

Our code is now in a method and we're able to call it multiple times with different parameters. We don't have to copy "Hi ..." twice. We'll separate our code logically into methods from now on.

A big difference between Swift and other languages is providing the parameter name when calling a method, in our case name. The program won't work without it. If we wanted to remove the parameter names, we'd write _ before the parameter name in the method declaration. That way we won't have to write the names when calling the method.

func greet(_ name: String) {
    print("Hi \(name)!")
}

Calling the method would then look like this:

greeter.greet("Carl")

The output:

Hi Carl!
Hi Peter!

Let's add some field (attribute) to the class, e.g. a text where the greeting will be stored. We declare fields as variables as well. As with methods, if we omit the field's modifier, Swift assumes that it's internal, which we're OK with. Let's modify our class:

class Greeter {

    var text: String = ""

    func greet(_ name: String) {
        print("\(text) \(name)!")
    }

}

We set the text field to an empty String. Otherwise, we'd have to deal with an Optional or a custom constructor.
We'll now initialize the text of the instance created in main.swift:

let greeter = Greeter()
greeter.text = "Hi"
greeter.greet("Carl")
greeter.greet("Peter")
greeter.text = "Hello programmer"
greeter.greet("Richard")

The output:

Hi Carl!
Hi Peter!
Hello programmer Richard!

It's not ideal that the Greeter class prints its greetings to the console itself. An object should have a single responsibility, and printing the output is better left to the code in the main.swift file. The advantage to designing objects with a single responsibility is that they're then universal and reusable. The object can only output text to the console now, but we'll change it so the method will only return the text and it'll be up to the recipient to know what to do with it. We could also store greetings into files, print them on websites or process them further.

Since we want the method to return a String value, we'll set the String return type to it. We use the return keyword to return a value. Return terminates a method and returns a value. Any code in the method's body after the return will not be executed! Let's modify both classes.

The greet() method in Greeter.swift:

func greet(_ name: String) -> String {
    return "\(text) \(name)!"
}

The modified code in main.swift:

let greeter = Greeter()
greeter.text = "Hi"
print(greeter.greet("Carl"))
print(greeter.greet("Peter"))
greeter.text = "Hello programmer"
print(greeter.greet("Richard"))

Now our code follows the guidelines of good OOP and overall good programming practices. Great! Our program already has some quality to it, despite it being relatively useless. If you want, you can try to create an object-oriented remake of our console calculator. In the next lesson, RollingDie in Swift - Constructors and Random numbers, we'll program a simple game. We'll make two objects, warriors, compete in an arena, which will also be an object. See? Now you have something to look forward to!
https://www.ictdemy.com/swift/oop/first-object-oriented-app-in-swift
A Javadoc comment is a multiline comment /* */ that starts with the * character and is placed above a class definition, interface definition, enum definition, method definition or field definition. For example, here is a java file:

/**
 * My <b>class</b>.
 * @see AbstractClass
 */
public class MyClass {
}

Please note that a javadoc-like comment inside a method is not a Javadoc comment; it is skipped by the Sun/Oracle javadoc tool and by our parser. On the internet you can find different types of documentation generation tools similar to javadoc. Such tools rely on a specific identifier: "!", "#", "$". Their comments look like "/*! some comment */", "/*# some comment */", "/*$ some comment */". Such multiline comments are not Javadoc.

Javadoc by specification can contain any HTML tags that let the user generate the content he needs. All tags are copied as is to the resulting javadoc HTML pages by the Sun/Oracle javadoc tool. All bad formatting is the responsibility of the user and the web browser. To validate Javadoc comments, Checkstyle parses them into a predictable structure - an Abstract Syntax Tree (AST). It is very difficult to parse a free-style format, so the input text needs to follow some format, and this is where the limitations appear. The comment should be written in Tight-HTML to build the nested AST tree that most Checks expect. For more details about parsing of HTML into an AST, read the HTML Code In Javadoc Comments and Javadoc parser behavior sections.

Every HTML tag should have a matching end HTML tag or be a void element. The only exceptions are HTML 4 tags whose end tag is optional (omittable) by the HTML specification (an example is TR), so Checkstyle won't show an error about the missing end tag; however, it leads to a broken Tight-HTML structure and as a result to not-nested content of the HTML tags in the Abstract Syntax Tree of the Javadoc comment. In other words, if HTML tags are not closed, the Javadoc grammar cannot determine the content of these tags, so the structure of the parse tree will not be nested as it is when using Tight-HTML code.
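Whether a block comment is Javadoc at all thus comes down to the character right after /*. A minimal, self-contained sketch of that distinction (a hypothetical helper for illustration, not part of the Checkstyle API):

```java
public class JavadocIdentifier {

    // Javadoc comments start with "/**"; "/*!", "/*#" and "/*$" are
    // identifiers used by other documentation tools, not by javadoc.
    static boolean isJavadoc(String blockComment) {
        return blockComment.startsWith("/**");
    }

    public static void main(String[] args) {
        System.out.println(isJavadoc("/** real javadoc */"));   // true
        System.out.println(isJavadoc("/*! doxygen-style */"));  // false
        System.out.println(isJavadoc("/*# some comment */"));   // false
    }
}
```

Note that real Javadoc recognition also requires the comment to sit above a class, interface, enum, method or field definition, as described above; the sketch covers only the identifier test.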
It is done just to not fail on every Javadoc comment, because unclosed tags and similar markup are extremely common in the wild.

The principle of writing Javadoc Checks is similar to writing regular Checks. You just extend another abstract class and use other token types. To start implementing a new Check, create a new class and extend AbstractJavadocCheck. It has two abstract methods you should implement: getDefaultJavadocTokens() and visitJavadocToken().

The Java grammar parses a java file based on the Java language specification, so there are singleline comments and multiline/block comments in it. The Java compiler doesn't know about Javadoc because it is just a multiline comment. To parse a multiline comment as a Javadoc comment, Checkstyle has a special parser based on an ANTLR Javadoc grammar. It processes block comments that start with the Javadoc identifier and parses them into an Abstract Syntax Tree (AST). The difference is that the Java grammar uses ANTLR v2, while the Javadoc grammar uses ANTLR v4. Because of that, the two grammars and their trees are not compatible: the Java AST consists of DetailAST objects, while the Javadoc AST consists of DetailNode objects.

The main Java grammar skips all whitespaces and newlines, so in the Java Abstract Syntax Tree there are no whitespace/newline nodes. In a Javadoc comment every whitespace matters, and Javadoc Checks need all those whitespace and newline nodes to verify the format and content of the Javadoc comment. Because of that, the Javadoc grammar includes all whitespaces and newlines in the parse tree (WS, NEWLINE).

Checkstyle can print the Abstract Syntax Tree for Java and Javadoc trees. You need to run the checkstyle jar file with the -J argument, providing a java file. For example, here is the MyClass.java file:

/**
 * My <b>class</b>.
 * @see AbstractClass
 */
public class MyClass {
}

Command:

java -jar checkstyle-X.XX-all.jar -J MyClass.java

Output:

CLASS_DEF -> CLASS_DEF [5:0]
|--MODIFIERS -> MODIFIERS [5:0]
| |--BLOCK_COMMENT_BEGIN -> /* [1:0]
| | |--COMMENT_CONTENT -> *\n * My <b>class</b>.\n * @see AbstractClass\n [1:2]
| | | `--JAVADOC -> \n * My <b>class</b>.\n * @see AbstractClass\n <EOF> [1:0]
| | | |--NEWLINE -> \n [1:0]
| | | |--LEADING_ASTERISK -> * [2:0]
| | | |--TEXT -> My [2:2]
| | | |--HTML_ELEMENT -> <b>class</b> [2:6]
| | | | `--HTML_TAG -> <b>class</b> [2:6]
| | | | |--HTML_ELEMENT_OPEN -> <b> [2:6]
| | | | | |--OPEN -> < [2:6]
| | | | | |--HTML_TAG_NAME -> b [2:7]
| | | | | `--CLOSE -> > [2:8]
| | | | |--TEXT -> class [2:9]
| | | | `--HTML_ELEMENT_CLOSE -> </b> [2:14]
| | | | |--OPEN -> < [2:14]
| | | | |--SLASH -> / [2:15]
| | | | |--HTML_TAG_NAME -> b [2:16]
| | | | `--CLOSE -> > [2:17]
| | | |--TEXT -> . [2:18]
| | | |--NEWLINE -> \n [2:19]
| | | |--LEADING_ASTERISK -> * [3:0]
| | | |--WS ->  [3:2]
| | | |--JAVADOC_TAG -> @see AbstractClass\n [3:3]
| | | | |--SEE_LITERAL -> @see [3:3]
| | | | |--WS ->  [3:7]
| | | | |--REFERENCE -> AbstractClass [3:8]
| | | | | `--CLASS -> AbstractClass [3:8]
| | | | |--NEWLINE -> \n [3:21]
| | | | `--WS ->  [4:0]
| | | `--EOF -> <EOF> [4:1]
| | `--BLOCK_COMMENT_END -> */ [4:1]
| `--LITERAL_PUBLIC -> public [5:0]
|--LITERAL_CLASS -> class [5:7]
|--IDENT -> MyClass [5:13]
`--OBJBLOCK -> OBJBLOCK [5:21]
|--LCURLY -> { [5:21]
`--RCURLY -> } [7:0]

As you see, a very small java file transforms into a huge Abstract Syntax Tree, because that is the most detailed tree, including all components of the java file: classes, methods, comments, etc.

In most cases while developing a Javadoc Check you need only the parse tree of the exact Javadoc comment. To obtain it, just copy the Javadoc comment to a separate file and remove /** at the beginning and */ at the end. After that, run checkstyle with the -j argument.

MyJavadocComment.javadoc file:

 * My <b>class</b>.
 * @see AbstractClass

Command:

java -jar checkstyle-X.XX-all.jar -j MyJavadocComment.javadoc

Output:

JAVADOC -> * My <b>class</b>.\r\n * @see AbstractClass<EOF> [0:0]
|--LEADING_ASTERISK -> * [0:0]
|--TEXT -> My [0:2]
|--HTML_ELEMENT -> <b>class</b> [0:6]
| `--HTML_TAG -> <b>class</b> [0:6]
| |--HTML_ELEMENT_OPEN -> <b> [0:6]
| | |--OPEN -> < [0:6]
| | |--HTML_TAG_NAME -> b [0:7]
| | `--CLOSE -> > [0:8]
| |--TEXT -> class [0:9]
| `--HTML_ELEMENT_CLOSE -> </b> [0:14]
| |--OPEN -> < [0:14]
| |--SLASH -> / [0:15]
| |--HTML_TAG_NAME -> b [0:16]
| `--CLOSE -> > [0:17]
|--TEXT -> . [0:18]
|--NEWLINE -> \r\n [0:19]
|--LEADING_ASTERISK -> * [1:0]
|--WS ->  [1:2]
|--JAVADOC_TAG -> @see AbstractClass [1:3]
| |--SEE_LITERAL -> @see [1:3]
| |--WS ->  [1:7]
| `--REFERENCE -> AbstractClass [1:8]
| `--CLASS -> AbstractClass [1:8]
`--EOF -> <EOF> [1:21]

Sometimes a Javadoc Check also needs information from the Java AST. For example, to write a Javadoc Check that verifies @param tags in the Javadoc comment of a method definition, you also need all of the method's parameter names. To get the method definition AST you should access the Java DetailAST tree from the Javadoc Check. For this purpose use the getBlockCommentAst() method, which returns a DetailAST node. Example:

class MyCheck extends AbstractJavadocCheck {

    @Override
    public int[] getDefaultJavadocTokens() {
        return new int[]{JavadocTokenTypes.PARAMETER_NAME};
    }

    @Override
    public void visitJavadocToken(DetailNode paramNameNode) {
        String javadocParamName = paramNameNode.getText();
        DetailAST blockCommentAst = getBlockCommentAst();

        if (BlockCommentPosition.isOnMethod(blockCommentAst)) {
            DetailAST methodDef = blockCommentAst.getParent();
            DetailAST methodParam = findMethodParameter(methodDef);
            String methodParamName = methodParam.getText();

            if (!javadocParamName.equals(methodParamName)) {
                log(methodParam, "params.dont.match");
            }
        }
    }
}

Checkstyle supports HTML4 tags in Javadoc comments: all HTML4 elements.
HTML4 was picked just to have a list of elements whose end tag is optional (omittable) and a list of void elements (also known as empty HTML tags, for example the BR tag).

HTML4 elements whose end tag is optional (omittable): <P>, <LI>, <TR>, <TD>, <TH>, <BODY>, <COLGROUP>, <DD>, <DT>, <HEAD>, <HTML>, <OPTION>, <TBODY>, <THEAD>, <TFOOT>.

Void HTML4 elements: <AREA>, <BASE>, <BASEFONT>, <BR>, <COL>, <FRAME>, <HR>, <IMG>, <INPUT>, <ISINDEX>, <LINK>, <META>, <PARAM>.

To make Checkstyle support HTML5 tags whose end tag is optional (omittable) and HTML5 void elements, we should update the Javadoc parser, because each element that breaks the Tight-HTML rules has to be defined in the Javadoc grammar. In the future we should update the Javadoc grammar when those tag lists extend (new tags, new HTML standard, etc.). (We already have an issue on updating the Javadoc grammar to HTML5.)

If Checkstyle meets an unknown tag (for example an HTML5 tag) it doesn't fail and parses the tag as the HTML_TAG Javadoc token type. Just follow the Tight-HTML rules to make the Checkstyle javadoc parser produce a nested AST, even though the tags are unknown.
Input:

<audio><source src="horse.ogg" type="audio/ogg"/></audio>

Output:

JAVADOC -> <audio><source src="horse.ogg" type="audio/ogg"/></audio><EOF> [0:0]
|--HTML_ELEMENT -> <audio><source src="horse.ogg" type="audio/ogg"/></audio> [0:0]
| `--HTML_TAG -> <audio><source src="horse.ogg" type="audio/ogg"/></audio> [0:0]
| |--HTML_ELEMENT_OPEN -> <audio> [0:0]
| | |--OPEN -> < [0:0]
| | |--HTML_TAG_NAME -> audio [0:1]
| | `--CLOSE -> > [0:6]
| |--HTML_ELEMENT -> <source src="horse.ogg" type="audio/ogg"/> [0:7]
| | `--SINGLETON_ELEMENT -> <source src="horse.ogg" type="audio/ogg"/> [0:7]
| | `--SINGLETON_TAG -> <source src="horse.ogg" type="audio/ogg"/> [0:7]
| | |--OPEN -> < [0:7]
| | |--HTML_TAG_NAME -> source [0:8]
| | |--WS ->  [0:14]
| | |--ATTRIBUTE -> src="horse.ogg" [0:15]
| | | |--HTML_TAG_NAME -> src [0:15]
| | | |--EQUALS -> = [0:18]
| | | `--ATTR_VALUE -> "horse.ogg" [0:19]
| | |--WS ->  [0:31]
| | |--ATTRIBUTE -> type="audio/ogg" [0:32]
| | | |--HTML_TAG_NAME -> type [0:32]
| | | |--EQUALS -> = [0:36]
| | | `--ATTR_VALUE -> "audio/ogg" [0:37]
| | `--SLASH_CLOSE -> /> [0:49]
| `--HTML_ELEMENT_CLOSE -> </audio> [0:51]
| |--OPEN -> < [0:51]
| |--SLASH -> / [0:52]
| |--HTML_TAG_NAME -> audio [0:53]
| `--CLOSE -> > [0:58]
`--EOF -> <EOF> [0:59]

Here is what you get if an unknown tag doesn't have a matching end tag (for example, the HTML5 tag <audio>):

Input:

<audio>test

Output:

[ERROR:0] Javadoc comment at column 1 has parse error. Missed HTML close tag 'audio'. Sometimes it means that close tag missed for one of previous tags.

There are also HTML tags that are marked as "Not supported in HTML5" (HTML Element Reference). The Checkstyle Javadoc parser can parse those tags too if they are written in Tight-HTML. Example.
Input:

<acronym title="as soon as possible">ASAP</acronym>

Output:

JAVADOC -> <acronym title="as soon as possible">ASAP</acronym><EOF> [0:0]
|--HTML_ELEMENT -> <acronym title="as soon as possible">ASAP</acronym> [0:0]
| `--HTML_TAG -> <acronym title="as soon as possible">ASAP</acronym> [0:0]
| |--HTML_ELEMENT_OPEN -> <acronym title="as soon as possible"> [0:0]
| | |--OPEN -> < [0:0]
| | |--HTML_TAG_NAME -> acronym [0:1]
| | |--WS ->  [0:8]
| | |--ATTRIBUTE -> title="as soon as possible" [0:9]
| | | |--HTML_TAG_NAME -> title [0:9]
| | | |--EQUALS -> = [0:14]
| | | `--ATTR_VALUE -> "as soon as possible" [0:15]
| | `--CLOSE -> > [0:37]
| |--TEXT -> ASAP [0:38]
| `--HTML_ELEMENT_CLOSE -> </acronym> [0:42]
| |--OPEN -> < [0:42]
| |--SLASH -> / [0:43]
| |--HTML_TAG_NAME -> acronym [0:44]
| `--CLOSE -> > [0:51]
`--EOF -> <EOF> [0:52]

Checkstyle GUI allows showing the javadoc tree of java files. To run it, use java -cp checkstyle-7.4-all.jar com.puppycrawl.tools.checkstyle.gui.Main and choose "JAVA WITH JAVADOC MODE" in the dropdown list at the bottom of the frame. Now you can see the parsed javadoc tree as a child of the comment block. Notice that only files with the ".java" extension can be opened. For a detailed reference, see the Checkstyle GUI documentation.

Java checks are controlled by the methods setTokens(), getDefaultTokens(), getAcceptableTokens(), getRequiredTokens(). Javadoc checks use the same model plus four extra methods for Javadoc tokens. Since the Java AST and the Javadoc AST are not bound to each other, it is highly recommended that Javadoc checks do not customize Java tokens and expect to be executed only on Javadoc tokens.
There are four methods in the AbstractJavadocCheck class to control the processed JavadocTokenTypes: one setter, setJavadocTokens(), which is used to define a custom set (different from the default one) of processed JavadocTokenTypes via the config file, and three getters, which have to be overridden: getDefaultJavadocTokens(), getAcceptableJavadocTokens(), getRequiredJavadocTokens().

For tags with an optional (omittable) end tag and for void tags (both listed above), "Nested"/"Non-Nested" is not applicable: all of them look like Non-Nested, and the "hasUnclosedTag" flag is "false" in all cases.
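The two HTML4 lists given earlier in this section can be captured directly in code. Here is a small self-contained sketch (plain Java sets, not Checkstyle's internal grammar representation) that tells whether a tag must be explicitly closed under strict Tight-HTML:

```java
import java.util.Set;

public class Html4Tags {

    // HTML4 tags whose end tag is optional (omittable), as listed above.
    static final Set<String> OPTIONAL_END = Set.of(
        "p", "li", "tr", "td", "th", "body", "colgroup", "dd", "dt",
        "head", "html", "option", "tbody", "thead", "tfoot");

    // Void (empty) HTML4 elements, as listed above.
    static final Set<String> VOID = Set.of(
        "area", "base", "basefont", "br", "col", "frame", "hr",
        "img", "input", "isindex", "link", "meta", "param");

    // A tag must be explicitly closed unless HTML4 allows omitting its
    // end tag or it is a void element with no content at all.
    static boolean needsExplicitClose(String tag) {
        String t = tag.toLowerCase();
        return !OPTIONAL_END.contains(t) && !VOID.contains(t);
    }

    public static void main(String[] args) {
        System.out.println(needsExplicitClose("b"));  // true
        System.out.println(needsExplicitClose("br")); // false
        System.out.println(needsExplicitClose("li")); // false
    }
}
```

A parser that tolerates the two special categories (as the Javadoc grammar does) would consult exactly such lists before reporting a missed close tag.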
http://checkstyle.sourceforge.net/writingjavadocchecks.html