Intersection of Cantor Sets

I will write about an interesting talk given by Pablo Shmerkin.

Main goal: computing dim(A\cap B) for sets A,B\subset \mathbb{R} with dynamical properties.

Theorem [Marstrand, 1954] Let E\subset\mathbb{R}^2 be a Borel set. Given \theta\in (0,\pi), let L_{\theta} be the family of all lines in the plane with slope \theta. Then for almost all l\in L_{\theta} we have dim(E\cap l)\leq \max (dim(E)-1,0).

The graph G of a Brownian motion in the plane has dim(G)=\frac{3}{2}; however, every vertical line intersects it in a set of Hausdorff dimension zero.

Theorem [Marstrand, 1954] Let E\subset\mathbb{R}^2 be a Borel set. Then \mathrm{ess\,sup}\, dim(E\cap l)=\max (dim(E)-1,0).

We can obtain the following corollaries for the Hausdorff dimension of the intersection of subsets A,B\subset \mathbb{R}.

Corollary Let A,B\subset\mathbb{R} be Borel sets. Then for Lebesgue-almost all t we have dim(A\cap (B+t))\leq \max (dim(A\times B)-1,0).

Remember that dim(A\times B)\geq dim(A)+dim(B), with equality if dim(A)=\overline{dim}_{BOX}(A). Therefore, we have the following corollary.

Corollary Let A,B\subset\mathbb{R} be Borel sets such that dim(A)=\overline{dim}_{BOX}(A). Then for Lebesgue-almost all t we have dim(A\cap (B+t))\leq \max (dim(A)+dim(B)-1,0).

From Marstrand's second theorem, we have the following corollary.

Corollary \mathrm{ess\,sup}\, dim(A\cap g(B))=\max (dim(A\times B)-1,0), where g:\mathbb{R}\to\mathbb{R} is linear.

Some examples.

There exists a compact A\subset \mathbb{R} with dim(A)=1 such that A\cap(A+t) has at most one point for every t\neq 0; hence dim(A\cap(A+t))=0 while \max (dim(A)+dim(A)-1,0)=1.

For every s\in[0,1] there exists A\subset \mathbb{R} with dim(A)=s and dim(A\cap (A+t))=s for all t\in\mathbb{R}. [Falconer's example]

Furstenberg asked what happens if A,B are invariant under the map T_p(x)=px \mod 1. Motivated by this question, let us first consider the classical Cantor set C\subset \mathbb{R}, for which dim(C)=\overline{dim}_{BOX}(C)=\frac{\log 2}{\log 3}. Applying Marstrand's theorem directly we obtain the bound dim(C\cap (C+t))\leq 2\frac{\log 2}{\log 3}-1\approx 0.262 for almost all t. There is, however, a better bound by Hawkes.

Theorem [Hawkes, 1975] dim(C\cap (C+t))= \frac{1}{3}\frac{\log 2}{\log 3}\approx 0.21 for almost all t.
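As a quick numeric check (my own, not from the talk), here is Marstrand's generic bound next to Hawkes's almost-sure value:

```python
import math

dim_C = math.log(2) / math.log(3)       # Hausdorff dimension of the Cantor set C
marstrand = max(2 * dim_C - 1, 0)       # generic bound 2*log2/log3 - 1
hawkes = dim_C / 3                      # Hawkes's a.e. value (1/3)*log2/log3

print(f"dim(C)          = {dim_C:.4f}")      # 0.6309
print(f"Marstrand bound = {marstrand:.4f}")  # 0.2619
print(f"Hawkes value    = {hawkes:.4f}")     # 0.2103
```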

Kenyon and Peres generalized the result further in 1991 to the case p\geq 2, D_1,D_2\subset \{0,1,\ldots,p-1\}, A_i=\{\sum_j x_j p^{-j}: x_j\in D_i\}. Notice that C=\{\sum_j x_j 3^{-j}: x_j\in \{0,2\}\}.

Theorem [R. Kenyon and Y. Peres, 1991] For almost all t\in [0,1], dim(A_1\cap (A_2+t))= \frac{\lambda}{\log p}, where \lambda is the largest Lyapunov exponent for the product of i.i.d. matrices M_r, with M_r(i,j)=\# \big((D_1+i+r)\cap (D_2+j+p)\big), r=0,\ldots, p-1, 0\leq i,j\leq 1.
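Numerically, the top Lyapunov exponent in the theorem can be estimated with the standard renormalized-product method. Here is a minimal sketch (my own, not from the talk); the single test matrix at the end is a placeholder, and for a concrete application one would build the matrices M_r from D_1, D_2 as in the theorem's definition:

```python
import math
import random

def top_lyapunov(matrices, n_steps=2000, seed=0):
    """Estimate the top Lyapunov exponent of a product of i.i.d. nonnegative
    matrices drawn uniformly from `matrices`, via lim (1/n) log ||M_n ... M_1 v||.
    The vector is renormalized at each step to avoid overflow."""
    rng = random.Random(seed)
    d = len(matrices[0])
    v = [1.0] * d
    log_growth = 0.0
    for _ in range(n_steps):
        M = rng.choice(matrices)
        v = [sum(M[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(v)  # l^1 norm; entries stay nonnegative
        log_growth += math.log(norm)
        v = [x / norm for x in v]
    return log_growth / n_steps

# Sanity check with a single known matrix, whose exponent is log 2:
est = top_lyapunov([[[2.0, 0.0], [0.0, 1.0]]])
print(round(est, 3))  # close to log 2 ≈ 0.693
```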

Barany and Rams in 2014 considered the case of self-similar sets E\subset [0,1]^2 obtained by generalizing the construction of C\times C (e.g., take the unit square [0,1]^2, divide it into 9 squares of side 1/3, label the squares by \{1,2,\ldots,9\}, identified with \{0,1,2\}^2; choose 4 of them, say \Omega=\{1,2,5,9\}, delete the rest, and repeat the procedure inside each of the 4 remaining squares, etc.).

Theorem [B. Barany and M. Rams, 2014] Let E\subset \mathbb{R}^2 be a set constructed by the process described above, where \Omega\subset \{0,1,\ldots,p-1\}^2, p\nmid \#\Omega and \#\Omega>p. If \tan \theta \in \mathbb{Q}, then there exist \alpha(\theta),\beta(\theta) with \alpha(\theta)>dim(E)-1>\beta(\theta) such that dim(E\cap l)=\alpha(\theta) for \nu-almost all l\in L_{\theta}, where \nu is a natural measure supported on E, and dim(E\cap l)=\beta(\theta) for Lebesgue-almost all l\in L_{\theta}.

The irrational case was recently studied by Pablo.

Theorem [P. Shmerkin, 2016] Let E\subset \mathbb{R}^2 be a closed set, invariant under T_p\times T_p(x,y)=(T_p(x),T_p(y)). If \tan \theta \notin \mathbb{Q}, then dim(E\cap l)\leq \max(dim(E)-1,0) for all l\in L_{\theta}.

The proof of the theorem follows from an application of a tool now known as the BSG (Balog-Szemerédi-Gowers) theorem, which has found many further applications. As far as I can remember, it is a combinatorial theorem, proved with graphs, that provides structure on the integers: it allows one to find lower bounds by convolving indicator functions, obtaining an \ell^2 norm that can be bounded by an \ell^1 norm outside a small set. Recall that the BSG theorem is a main ingredient in Gowers's proof of the first effective bounds in Szemerédi's theorem, showing that any subset A\subset\{1,2,\ldots,N\} free of k-term arithmetic progressions has cardinality O(N(\log \log N)^{-c_k}) for an appropriate c_k>0.

Notice that the theorem applies in particular to E=C\times C. Pablo was not convinced of the conjecture that indeed dim(E\cap l)\in \{dim(E)-1,\,0\}.

As a corollary we conclude that if \lambda\notin\mathbb{Q}, then for all t\in \mathbb{R}, dim(C\cap (\lambda C +t))\leq 2\frac{\log 2}{\log 3}-1.

The following theorem was obtained independently in 2016 by P. Shmerkin and M. Wu:

Theorem [P. Shmerkin, M. Wu, 2016] If p and q are not powers of the same integer, and A,B are closed sets, invariant under T_p and T_q respectively, then
dim(A\cap (\lambda B+t))\leq \max(dim(A)+dim(B)-1,0) for all \lambda\neq 0, t\in\mathbb{R}.


Is human behaviour mathematically predictable?

This is such a broad question that nobody can even pretend to answer it. Instead, a strategy is to tackle partial problems: human spending and investment behaviour, mobility, public and private social behaviour, etc.

For example, Albert-László Barabási et al. concluded that the future whereabouts of an individual can be predicted with 93% accuracy over a time horizon of one hour, based only on their previous trajectory. Randomness and unpredictability therefore seem unrelated to short-time mobility patterns, so it should be possible to predict related human behaviour: travel times, fuel consumption, the spread of contagious diseases, the popularity of places, building usage, etc.

Paradoxically, I wonder whether it is true that the more options a person has, the more predictable their behaviour becomes, and, in the opposite direction, the fewer options a person has, the more unpredictable their behaviour is.

Headlight Guards



Why are headlight guards so simple?

If they are supposed to protect the headlights, why don't they have a more efficient geometry?

A long time ago I was astonished by how simple headlight guards were. I now think I probably should have asked a jeep owner, but I did not know anyone who used any kind of headlight guard.

My short answer would be the following: the resistance of the glass (or plastic, or whatever) of the headlights is known, so it is possible to estimate the probability that an object of known material, volume, mass, geometry and velocity breaks the headlight. I guess an accurate model may be difficult to build, so, starting from experimentation with different objects, the conclusion was probably that the volume of the object is fundamental: small volumes cannot break the headlight, so complex geometries for the guards were not necessary. Anyway, I wonder how far this answer is from reality. Can anybody provide a better one?

-H- Problem

The -H- problem is motivated by a road configuration that constitutes a rather typical pattern of street intersections.

The -H- configuration consists of the following intersection of streets:

.                            N
.
.          | Street 1 |              | Street 2 |
.          |          |              |          |
.  --------            --------------            --------
.  W   Street 3           Street 4       Street 5      E
.  --------            --------------            --------
.          |          |              |          |
.          | Street 6 |              | Street 7 |
.
.                            S

The problem consists in coordinating car traffic simultaneously along the 7 streets of the picture so as to minimise congestion. Cars must be allowed to travel from any part of the city (North, South, West, East) to any other part of the city.

This problem is far from being solved from a practical point of view.

Some time ago I was in a bus on Street 7, going north from the south. The configuration I was in was the following:

  • Streets 3, 4 and 5: W<->E
  • Streets 1 and 6: N->S
  • Streets 2 and 7: S->N

And additionally, we could turn:

  • From Street 7, left onto Street 4.
  • From Street 7, right onto Street 5.
  • From Street 5, right onto Street 2.
  • From Street 4, left onto Street 6.
  • From Street 1, right onto Street 3.
  • From Street 1, left onto Street 4.

It was a nightmare… I waited a long time to go from Street 7 to Street 2, because:

Cars going from Street 1 to Street 6 blocked the cars traveling W<->E between Streets 3 and 4. As a consequence, the cars traveling W<->E between Streets 4 and 5 blocked the cars traveling S->N from Street 7 to Street 2.

In practice, the result of the current design was a stationary solution with cars unable to move in any direction: cars going S<->N were blocking, and blocked by, cars traveling W<->E, and vice versa.
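This stationary gridlock can be phrased as a cycle in a "waits-for" graph, exactly as in deadlock detection. A toy sketch (entirely my own; the node names and blocking edges are just my reading of the situation above):

```python
def find_cycle(waits_for):
    """Return a list of nodes forming a directed cycle in the graph
    {node: [nodes it waits for]}, or None if the graph is acyclic."""
    visiting, done = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for nxt in waits_for.get(node, []):
            if nxt in visiting:              # back edge: a gridlock cycle
                return path[path.index(nxt):]
            if nxt not in done:
                found = dfs(nxt, path)
                if found:
                    return found
        visiting.discard(node)
        done.add(node)
        path.pop()
        return None

    for v in list(waits_for):
        if v not in done:
            cycle = dfs(v, [])
            if cycle:
                return cycle
    return None

# Blocking relations read off the situation above (my own modelling choice):
waits_for = {
    "W<->E between 3 and 4": ["N->S from 1 to 6"],
    "S->N from 7 to 2":      ["W<->E between 4 and 5"],
    "W<->E between 4 and 5": ["S->N from 7 to 2"],
    "N->S from 1 to 6":      ["W<->E between 3 and 4"],
}
print(find_cycle(waits_for))  # a non-empty cycle: stationary gridlock
```

In this toy model, no driver can break the cycle alone; only changing the signal timing (i.e., removing an edge of the graph) restores a flow.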

I saw at the time that it was completely impossible for any driver, as an individual, to do anything useful in order to travel in the desired direction without taking a decision that would have disastrous consequences for the cars going in other directions.

I guess it is sad to admit that the same problem probably repeats every single day at exactly the same time, so an efficient solution to this apparently simple but common street configuration could have a valuable impact on transport problems in crowded cities.

Mathematics is becoming old but preserving beauty

Nowadays, the mathematical modelling that allowed us to make huge progress during the last century is leaving us helpless when it comes to solving new challenges quickly and accurately.

The over-simplifying assumptions that engineering used to work with in the early 1900s are no longer the best solutions we can offer, given the improvements in the speed of direct computation: greedy algorithms can, in many realistic scenarios, behave better than over-simplified mathematical approaches.

This requires a new way of approaching problems, one that takes into account the huge amount of previous research available along similar lines. This phenomenon is present in mathematical education, among many other areas.

It seems that the level of abstraction at which modern mathematics works is no longer at the level of the practical problems that arise in our daily life, but moves toward more abstract scenarios. On the other side, mathematics seems too rigid to adapt its techniques to difficult problems for which technology can handle huge numbers of variables and provide sufficiently accurate approximate solutions. The most elementary example is the aviation industry: mathematicians have still not been able to prove that a plane can fly, but engineers have been able to make planes fly better.

Something is happening that, not being an expert, reminds me of what I have read about philosophy at the beginning of the 20th century: a big disappointment with the way philosophy was done. It attempted to solve such difficult problems that it ended up finding useless answers. Mathematics won over philosophy by fixing its foundations and having more concrete goals. However, it seems the need for precision in maths carries a huge price in modern days, as it translates into spending a huge amount of time. It seems to me that the time required to solve a real problem mathematically can be orders of magnitude greater than the time needed to find a good approximate solution using available technology. The combination of different sciences is drastically shortening the time needed to find solutions to real problems. Mathematics is evolving more slowly, as an isolated group, because time seems to have assumed a greater value than intellect.

Maybe, in the future, the only reason to study mathematics will be to admire its honest beauty. Efficiency was a borrowed term, believed to belong to optimisation, a mathematical subject whose beauty went beyond what computer scientists found interesting; they were nevertheless able to use it to defeat intellect, with fast machines and clever algorithms built as a cooperative business.

The car cover problem

Yesterday I faced the following problem, which I found interesting.

There is a small car entirely covered with a large cover, clearly designed for a bigger car. The wind blows hard, so hard that every once in a while the cover leaves the car completely uncovered.

Suppose the car, whose geometry is known, sits on a horizontal surface of the earth. The geometry and material of the cover are also known, as is the wind velocity at every point of a hemispherical surface of radius large enough to contain the car and the cover, while the cover has not yet left the car completely uncovered.

Is it possible, under all the stated assumptions, to determine the time needed for the car to become uncovered?

The solution to this problem starts by considering the most elementary case imaginable. Suppose the geometry of the car is hemispherical with radius r_1, and so is the cover, with radius r_2>r_1; suppose the wind velocity is known on the hemispherical surface of radius 2r_2. Assume the friction coefficient between the car and the cover is zero, and that the cover behaves as a fluid with the same density as the air.

Under these assumptions, the problem can be partially described! [Kurt N. Johnson, "Circularly symmetric deformation of shallow elastic membrane caps", Volume LV, Number 3, September 1997, pages 537-550].

A random meeting with a formal ergodic theorist

Most people, say more than 99.999999% of the population, have no idea what ergodic theory is. Therefore, if you meet someone at random, you can be sure that they have no idea at all what it means. Among the many examples you can give to introduce the concept of ergodicity, Jana Rodríguez Hertz told us one today that was quite nice and that I would not like to keep to myself.

The example Jana told us was the following:

Suppose you want to know the number of fish in a lake, say n. It is extremely difficult to count them, but you can estimate n. To do so, take p fish, mark them, and return them to the lake. After some time, take m fish and count how many are marked. If you assume that after enough time the proportion of marked fish is the same at "every scale", i.e., any sample of k fish contains a number r(k) of marked ones with \frac{r(k)}{k}=\frac{p}{n}, then you can estimate n by \frac{p m}{r(m)}, where r(m) is the number of marked fish among the m you took. Mathematically, the validity of this statement is justified by the mixing property, which is a strong condition on the system.

Another strategy for estimating n is to capture m fish every 10 days, count how many are marked, and return them to the lake. Suppose the i-th time you captured fish you obtained r_i marked ones. If you do this many times, say N, then you can estimate n by \frac{Np}{\sum_{i=1}^N\frac{r_i}{m}}= \frac{Npm}{\sum_{i=1}^N r_i}. Mathematically, the validity of this statement is justified by ergodicity, which is weaker than the mixing condition needed for the previous estimation strategy.
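A quick simulation comparing the two estimators on an artificial lake (my own sketch, not part of Jana's example; all the numeric parameters below are made up):

```python
import random

def estimate_fish(n=10000, p=500, m=200, N=100, seed=1):
    """Simulate a lake of n fish, p of them marked, and return both
    estimates of n: from a single sample of size m, and from the
    average over N repeated samples (the 'ergodic' estimator)."""
    rng = random.Random(seed)
    lake = [True] * p + [False] * (n - p)   # True = marked fish
    # Single-sample estimate: n ≈ p*m / r(m)
    r = sum(rng.sample(lake, m))
    single = p * m / r if r else float("inf")
    # Repeated-sample estimate: n ≈ N*p*m / (r_1 + ... + r_N)
    total = sum(sum(rng.sample(lake, m)) for _ in range(N))
    averaged = N * p * m / total
    return single, averaged

single, averaged = estimate_fish()
print(single, averaged)  # both should be near the true n = 10000
```

As expected, the averaged estimator is typically much closer to the true n than the single-sample one, since it averages out the sampling fluctuations.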

In formal terms, you may introduce ergodicity as the property of a system for which the second estimation approach works. Sadly or luckily, who knows, ergodicity is unlikely to occur in most dynamical systems in nature :).