Overall, the present meta-analysis does not support periodic CHO restriction as a superior approach for enhancing endurance performance in well-trained athletes. Thus, the physiological stimuli elicited by undertaking an acute exercise bout with low CHO availability (as observed in acute exercise studies [28,29,30,31]) do not translate into clear, measurable performance enhancements in already adapted endurance-trained athletes compared to training with high CHO availability (Fig. 2).

This overall meta-analysis was based on the effects on endurance performance reported in nine studies of well-trained endurance athletes, and the evaluation of methodological quality revealed that these studies achieved five to seven of 10 points on the PEDro scale (Table 1). This scale has previously been interpreted such that studies scoring lower than four points are considered of “poor” quality, four to five points “fair” quality, and six to eight points “good” quality, while nine to 10 points indicates excellent methodological quality [48]. Overall, the nine studies received an average of 5.8 points, and all included studies are thus categorized as either “fair” or “good” quality (see Table 1).
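The PEDro quality bands from [48] amount to a simple score-to-label mapping. Purely as an illustration (the individual scores below are hypothetical examples within the reported five-to-seven range, not values taken from Table 1), the banding can be sketched as:

```python
def pedro_quality(score: int) -> str:
    """Map a PEDro score (0-10) to the quality band proposed in [48]."""
    if not 0 <= score <= 10:
        raise ValueError("PEDro scores range from 0 to 10")
    if score < 4:
        return "poor"
    if score <= 5:
        return "fair"
    if score <= 8:
        return "good"
    return "excellent"

# The nine included studies scored five to seven points, so every study
# falls in the "fair" or "good" band:
print([pedro_quality(s) for s in (5, 6, 7)])  # ['fair', 'good', 'good']
```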

In general, nutritional studies like these are difficult to blind, at least to the athletes and to the therapists administering the diets, and accordingly, these two points were not awarded to any of the studies. Due to this inherent difficulty of blinding nutritional studies, evaluations based on the PEDro score can lead to misinterpretations of the scientific merit of such studies, which must therefore be interpreted with this limitation in mind. The lack of blinding increases the risk of bias, and a few studies have circumvented this by allocating the athletes to their preferred treatment [6, 8]. However, this also implies that the criterion of randomization was not met in these studies, which increases the risk of selection bias and bias due to confounding.

Training or recovery with restricted CHO availability may in practice be achieved by numerous combinations of training and dietary interventions, but the nine studies identified by the present systematic search can principally be divided into three overall categories. One group of studies [43, 44, 46] employed a “sleep-low” strategy, where CHO intake was restricted between a depleting session in the afternoon and a “train-low” session the subsequent morning. Interestingly, two of the three studies in this category [43, 44] reported superior effects on endurance performance in the groups training with CHO periodization, but as discussed below, this may, at least in part, relate to adaptations not related to “train-low” per se. Another group of studies [40, 41, 45] included two daily training sessions, of which the first was intended to deplete muscle glycogen. This session was followed by a CHO-restricted recovery period, whereby the second session commenced with low CHO availability. All studies in this category reported no superior effect on endurance performance in the CHO-restricted groups compared to groups consuming a high CHO diet between the sessions [45] or training once every day with replenished muscle glycogen [40, 41]. The last three studies incorporated CHO periodization either by training with restricted CHO intake or by applying restrictive strategies during both training and recovery [6, 8, 42]. All three studies reported no additional performance improvements in comparison to control groups with a high CHO intake. Although the present analysis provided no collective evidence in favor of CHO periodization, it is relevant for nutritionists, coaches and athletes to evaluate the different strategies and identify benefits and side-effects that may affect the overall training outcome.

Training after overnight recovery with reduced CHO intake

Interestingly, the only two studies displaying beneficial overall effects of CHO restriction belong to this category; both were conducted by Marquet and colleagues [43, 44] (Table 1). Both studies utilized the sleep-low approach with high-intensity training in the afternoon followed by moderate-intensity cycling the subsequent morning (i.e., presumably with low muscle glycogen, although this was not verified by measurements in either study). Following 3 weeks of training, 10 km run time, preceded by 40 km of preload cycling, was improved by 3% in the sleep-low group [43], whereas it remained unchanged in the control group. Similarly, time to complete a 20 km time-trial was improved by 3% solely in the sleep-low group following an intervention lasting only one week [44]. Unfortunately, potential underlying physiological mechanisms were not investigated in these studies, but as mentioned by the authors, the superior effect after only one week of training may be explained by super-compensated muscle glycogen stores [44]. This notion was supported by numerical increases in total energy and CHO intake in the CHO-periodized group during the one-week training period and concomitant numerical reductions in the CHO-fed control group. Concerning the study lasting 3 weeks, a significant weight loss was observed in the sleep-low group [43], which may have contributed to the observed change in running performance [49]. Moreover, the rating of perceived exertion (RPE) at termination of the 120 min preload cycling was reduced only in the sleep-low group after the training period, suggesting that the improved running performance could also be attributed to a less demanding preload. Endurance performance was not improved in the CHO-fed control groups of these two studies, which could indicate that the training intervention per se was either sub-optimal or that the athletes were “performance stable” after the two- to three-week lead-in periods.
RPE during the morning sessions was reported to be significantly higher in the sleep-low groups of both studies, which indicates that the internal training loads differed between groups and that the control groups could have tolerated a higher training load [43, 44]. If the training load was indeed sub-optimal, these findings could indicate that “sleep-low” constitutes a beneficial superimposing strategy during periods with low-to-moderate training loads that per se elicit a submaximal training response. Using a comparable sleep-low approach three days a week for four weeks, Riis and colleagues were not able to demonstrate a superior effect of CHO periodization on endurance performance [46]. In their study, power output during a 30 min time-trial that was preceded by 90 min of preload cycling improved similarly by 15–19% after high and periodized CHO diets.
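Internal load can be quantified from RPE with, for example, the session-RPE method (internal load = session RPE × duration in minutes). A minimal sketch with hypothetical RPE values, used only to illustrate how sessions of identical external load can produce different internal loads:

```python
def session_rpe_load(rpe: float, duration_min: float) -> float:
    """Session-RPE internal training load in arbitrary units (AU)."""
    if not 0 <= rpe <= 10:
        raise ValueError("Session RPE is rated on a 0-10 scale")
    return rpe * duration_min

# Hypothetical 60 min morning session performed at the same external load:
sleep_low = session_rpe_load(rpe=7, duration_min=60)  # 420 AU
high_cho = session_rpe_load(rpe=5, duration_min=60)   # 300 AU
print(sleep_low > high_cho)  # True: higher internal load despite equal work
```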

Restriction of CHO intake between two daily sessions

Conducting two daily sessions is another way of utilizing a depleting training session to commence both post-exercise recovery and a second training session with reduced CHO availability, although with a shorter recovery period between sessions (i.e., 1–7 h). Yeo et al. [40] and Hulston et al. [41] conducted comparable studies, including six weekly cycling sessions alternating between a prolonged session of moderate intensity and a HIIT session. Athletes trained either once every day with high muscle glycogen availability or twice every second day with 1–2 h of recovery between sessions. With this approach, β-HAD activity was increased solely in the groups training twice every second day, which was also the case for CS activity in the study by Yeo et al. [40]. Importantly, training twice per day was associated with a reduced intensity during the HIIT sessions, which may have lowered the training response and outweighed the superior enzymatic adaptations. Thus, endurance performance was not superiorly improved by commencing every second session with low muscle glycogen (Table 1). Importantly, the different training distributions between groups (i.e., once every day vs. twice every second day) leave open the question of whether the enhanced enzymatic adaptations were due to the periodic CHO restriction or to the different training schedules. A recent study in untrained individuals modified the approach by reducing glycogen availability prior to every second session in both groups [50]. Here, mitochondrial adaptations were shown to be superior in the group training twice per day, indicating that differences in training distribution may have affected the metabolic adaptations, irrespective of differences in muscle glycogen.

We used an alternative approach with two groups training twice per day while consuming isocaloric diets between sessions containing either low or high CHO [45]. As in the above-mentioned sleep-low studies, HIIT was used to reduce muscle glycogen availability, entailing that external training loads were identical in both groups. Three days a week, athletes performed high-intensity cycling in the morning, recovered for 7 h with a high or low CHO intake, and trained for 2 h at moderate intensity in the afternoon. This intervention was superimposed on the routine training of endurance athletes, and following the four-week training period, CS activity and endurance performance were increased to the same extent in both groups. Importantly, a check following the 7th day with CHO manipulation revealed high levels of muscle glycogen after training with the CHO-restricted diet (i.e., 431 mmol·kg dw−1). Since low glycogen levels seem to be a prerequisite for enhancing the acute training response in endurance athletes [28, 29, 31], this could explain the absent superior effects in that study. This observation furthermore demonstrates the need for demanding interventions to repeatedly induce glycogen depletion in endurance athletes. In general, resting glycogen levels increase with endurance training, and accordingly, we have observed resting levels of up to 880 mmol·kg dw−1 in highly trained endurance athletes [45, 51]. Together with a high metabolic flexibility [40, 46, 52], this obviously counteracts glycogen depletion during exercise. It has been suggested that glycogen levels of 250–300 mmol·kg dw−1 are advantageous in order to provide a cellular environment that facilitates cell signalling [53], and consequently, it seems preferable to commence glycogen-depleting sessions with moderate, rather than high, glycogen levels.
Otherwise, the glycogen-depleting sessions must include a certain amount of moderate-to-high-intensity exercise to reach low glycogen levels; moreover, energy restriction, rather than CHO restriction alone, may be necessary to maintain a reduced muscle glycogen availability during post-exercise recovery.
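The glycogen figures above suggest a rough decision rule for when depleting work is needed before a “train-low” stimulus. The cut-offs in this sketch are illustrative approximations drawn only from the values cited in the text ([53] and [45, 51]), not validated thresholds:

```python
def glycogen_band(level_mmol_kg_dw: float) -> str:
    """Classify muscle glycogen (mmol per kg dry weight) against the
    bands discussed in the text; cut-offs are illustrative only."""
    if level_mmol_kg_dw < 250:
        return "low"
    if level_mmol_kg_dw <= 300:
        # 250-300 mmol/kg dw suggested to facilitate cell signalling [53]
        return "signalling window"
    # Above ~300: depleting work likely needed before a "train-low" stimulus
    return "high"

print(glycogen_band(431))  # 'high' - post-restriction level reported in [45]
print(glycogen_band(880))  # 'high' - resting level in highly trained athletes [45, 51]
print(glycogen_band(275))  # 'signalling window'
```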

Alternating between different CHO periodization stimuli

Performing the same training protocol repeatedly can lead to a gradual reduction in the acute training response, emphasizing a general need for varying the training stimulus [38, 54, 55]. This may also apply to CHO periodization, and to prevent this potential plateau phenomenon, a mixture of different types of training and/or CHO-restrictive strategies seems reasonable. In this context, Cox et al. [42] conducted a training study with two groups of cyclists and triathletes completing routine training (e.g., hill rides, HIIT sessions, prolonged sessions) (Table 1). In one group, CHO was consumed before and during each training session, whereas the other group fasted for 2 h prior to each training session and for the initial 90 min of the extended sessions. Following training, CS activity was increased only in the group training with high CHO availability, whereas β-HAD activity remained unchanged in both groups. Time to complete a 7 kJ·kg bw−1 time trial following 100 min of preload was similarly improved in both groups (Table 1). In other studies by Burke and colleagues [6, 8], elite race walkers completed a period of routine intensified training with either high CHO availability or alternation between different CHO-periodizing strategies (i.e., “fasted training”, “twice-a-day” training, and “sleep-low”) (Table 1). Following 3 weeks of training, no superior effect of CHO periodization was observed on 10 km race time, and improvements were comparable to those observed following training with high CHO availability.

Overall, seven of the nine studies have revealed that training with CHO periodization is not superior in terms of endurance performance when compared to a high CHO diet in highly adapted athletes. In addition to the possible explanations mentioned above, it cannot be excluded that the glycogen-depleting sessions in the CHO-fed control groups initiated cell signalling sufficient to induce adaptations comparable to those in the CHO-periodized groups. Some acute findings in highly trained endurance athletes support this idea by demonstrating similar increases in markers of mitochondrial biogenesis during recovery from glycogen-depleting sessions, irrespective of the post-exercise CHO intake [30, 56]. Another explanation could be that the training interventions per se fully exploited the adaptive response in most of the included studies, thus averting further improvements by CHO periodization (Table 1). In support of this, significant improvements in endurance performance of 5–19% were observed in seven of the nine control groups training with high CHO availability [6, 8, 40,41,42, 45, 46] (Table 1). Performance improvements of such magnitudes could likely result from the use of intensified training programs, which was the case in at least three of the studies [6, 8, 45]. During periods containing this amount of high-intensity training (e.g., 24 × 5 min HIIT per week), it may be particularly difficult to achieve an additive training effect of CHO periodization. However, due to different periodization strategies, such intense periods are often conducted during certain parts of the season and surrounded by less intense periods, where athletes are likely more stable in terms of performance and perhaps more responsive to “train-low” interventions [57, 58].

Relevance of CHO periodization in highly trained endurance athletes

Elite coaches balance multiple components of training to optimize the overall physiological and mental stress among athletes (e.g., volume, intensity, distribution and recovery). Since superimposing periodized CHO restriction onto routine training will likely affect the priority of other important parts of training, the potential benefits must be carefully considered. In particular, the comprehensive strategies presented in the literature may be challenging to implement routinely among endurance athletes training 20–30 h each week (e.g., cyclists and triathletes). In this regard, it is worth noting that the four studies using designs that reflect the actual training of elite endurance athletes have all shown that the effects of “real-life” training on performance were not augmented by CHO periodization [6, 8, 42, 45]. Also, many endurance athletes may already, in their habitual training, be exposed to prolonged exercise bouts with low glycogen levels towards the end, or achieve “train-low” effects during periods with frequent training bouts. Thus, endurance athletes are presumably already somewhat adapted to training with low CHO availability, and the potential for further effects from periodic CHO restriction may be limited, especially for those undertaking very long training bouts with low or no CHO intake. Such unconscious “exposure” to low CHO availability among endurance athletes can be caused by the difficulty of balancing energy intake and expenditure during periods with frequent training sessions and high training loads. While this may lead to an enhanced training response in some cases, low energy availability over prolonged periods may cause negative health effects that eventually compromise performance (e.g., endocrine perturbations and impaired bone health) [59].
This emphasizes the necessity of paying attention to the overall energy balance when introducing deliberate CHO restriction in endurance athletes with high training loads. In contrast, athletes with modest total training volumes, and especially those consuming large amounts of CHO before and during training and in recovery, may be more likely to benefit from CHO periodization.

Based on the current literature, the use of CHO periodization is not clearly beneficial in elite endurance athletes. However, since both the intervention periods (i.e., one to four weeks) and performance tests (i.e., ≤ 2 h) used in the existing studies have been relatively short, it is too early to draw definitive conclusions. Apart from a few studies [6, 8, 45], the training volume has also been lower than normal for elite endurance athletes in similar disciplines, and the training intensities have often been clamped, which deviates from the nature of “real-life” training in elite endurance athletes. In this regard, and considering the relatively small effect that can be expected on performance in highly trained athletes, studies that superimpose tolerable CHO periodization approaches onto months of routine training are warranted.

As discussed above, a deliberate implementation strategy is important in order to avoid the potential negative consequences of training with low CHO availability. Compromising training intensity by CHO restriction could thus be detrimental in endurance sports that are characterized by decisive periods of high exercise intensity (e.g., marathon running, road cycling and triathlon), which emphasizes the importance of prioritizing training quality in order to acquire the adaptations that promote the ability to perform such exercise [60,61,62]. Accordingly, the implementation of CHO periodization should be considered on both the macro-cycle and micro-cycle level. As mentioned above, relatively performance-stable periods with small amounts of high-intensity training may be preferable for incorporating CHO periodization, and the risk of compromising training intensity is moreover reduced during these periods. During training phases including larger amounts of high-intensity exercise, CHO restriction must, on the other hand, be carefully incorporated to ensure sufficient resources to perform high-intensity training as well as appropriate recovery from it. In this regard, and based on the significant improvements observed in the control groups of the studies included in this review, it is likely that conducting traditional HIIT sessions in a CHO-fed state elicits a near-maximal or maximal acute training response at both the muscle cell and cardiovascular level. This further supports the idea of incorporating “train-low” during training at lower intensities, and future studies should examine whether CHO-periodizing strategies prove beneficial during prolonged, less intense training periods.

Finally, low CHO availability may reduce the training intensity not only during high-intensity exercise, but also during moderate-intensity exercise, which may compromise important training adaptations. Strategies that can partly or fully maintain the training intensity during “train-low” interventions are therefore warranted. For instance, it would be interesting to know if, and to what extent, CHO provision during training in a glycogen-depleted state can rescue the training intensity (e.g., by reducing central fatigue), and how this affects myocellular signalling in highly trained athletes.