| file (string, 13-16 chars) | year (int64, 2,022) | label (int64, 0-2) | text (string, 33-437k chars) | page_no (int64, 0-152) | bbox (sequence) |
|---|---|---|---|---|---|
| sNuFKTMktcY.pdf | 2,022 | 0 |
LINEBREAK active hierarchical exploration with stable subgoal representation learning LINEBREAK siyuan li†, jin zhang†, jianhao wang†, yang yu‡, chongjie zhang† †tsinghua university, ‡nanjing university {sy-li17,jin-zhan20,wjh19}@mails.tsinghua.edu.cn {yuy}@lamda.nju.edu.cn, [email protected] LINEBREAK abstract LINEBREAK goal-conditioned hierarchical learning (gchrl) provides a reinforcement promising approach to solving long-horizon tasks. recently, its success has been extended to more general settings by concurrently learning hierarchical policies and subgoal representations. although gchrl possesses superior exploration ability by decomposing tasks via subgoals, existing gchrl methods struggle in temporally extended tasks with sparse external rewards, since the high-level policy learning relies on external rewards. as the high-level policy selects subgoals in an online learned representation space, the dynamic change of the subgoal space severely hinders effective high-level exploration. in this paper, we propose a novel regularization that contributes to both stable and efficient subgoal representation learning. building upon the stable representation, we design measures of novelty and potential for subgoals, and develop an active hierarchical exploration strategy that seeks out new promising subgoals and states without intrinsic rewards. experimental results show that our approach significantly outperforms state-of-the-art baselines in continuous control tasks with sparse rewards. LINEBREAK introduction LINEBREAK goal-conditioned hierarchical reinforcement learning (gchrl) has long demonstrated great potential to solve temporally extended tasks (dayan & hinton, 1993; schmidhuber & wahnsiedler, 1993; vezhnevets et al., 2017; p´er´e et al., 2018; nair & finn, 2019), where a high-level policy periodically sets subgoals to a low-level policy, and the low-level policy is intrinsically rewarded for reaching those subgoals. early gchrl studies used a hand-designed subgoal space, such as positions of robots (nachum et al., 2018; levy et al., 2019) or objects in images (kulkarni et al., 2016a). to alleviate the dependency on domain-specific knowledge, recent works investigate learning subgoal representations along with hierarchical policies (nachum et al., 2019a; dilokthanakul et al., 2019; li et al., 2021), which have shown promise in a more general setting. although the exploration ability of hrl is boosted via decomposing tasks and reusing low-level policies, existing gchrl methods struggle in long-horizon tasks with sparse external rewards. the high-level policy learning in gchrl relies on external rewards, and those tasks are hard to solve with naive high-level exploration strategies. furthermore, as the high-level policy makes decisions in a latent subgoal space learned simultaneously with the hierarchical policy, the instability of the online-learned subgoal space results in severe non-stationarity of the high-level state transition function and hinders effective high-level exploration. LINEBREAK to address these challenges, we develop an active hierarchical exploration approach with stable subgoal representation learning (hess). our approach adopts a contrastive objective for subgoal representation learning inspired by li et al. (2021). to learn a subgoal space that could effectively guide hierarchical exploration, we propose a novel state-specific regularization that contributes to both stable and efficient subgoal representation learning. 
the proposed regularization constrains the changes for those embeddings that have already well satisfied the representation learning objective. as the hierarchical agent gradually expands its exploration to new state regions, the regularization allows for updates of the embeddings that underfit the learning objective. benefiting from the proposed regularization, we could improve the representation learning efficiency without hurting its LINEBREAK stability via the prioritized sampling technique (hinton, 2007), which prioritizes training samples with larger losses. LINEBREAK building upon our stable subgoal representation learning, we design an active exploration strategy for high-level policy learning. as shown in previous work (strehl & littman, 2008; bellemare et al., 2016; ostrovski et al., 2017), the visit count provides a simple and effective novelty measure for exploration. however, with online subgoal representation learning, the visit count for subgoals is defined in a changing space. therefore, such a novelty measure alone is not sufficient for efficient high-level exploration, although the stability regularization could improve its accuracy. our insight is that desirable novel subgoals should be reachable and effectively guide the agent to unexplored areas. thus we design a novel potential measure to regularize the novelty measure, which indicates the reachability to the neighbor regions of the visited state embeddings. with the regularized novelty measure, our proposed active exploration strategy directly selects state embeddings with high potential and novelty as subgoals without introducing intrinsic rewards. it is more efficient than the reactive exploration methods that need to learn how to maximize the intrinsic rewards before performing exploratory behaviors (tang et al., 2017; pathak et al., 2017; burda et al., 2018), since in the active exploration approach, the subgoal measures have direct effects on the behavior policy. furthermore, the active exploration strategy naturally avoid introducing additional non-stationarity (i.e., induced by dynamically changing intrinsic rewards) into hrl. LINEBREAK we compare the proposed method hess with state-of-the-art baselines in a number of difficult control tasks with sparse rewards. note that the environment setting in this paper is much more challenging to exploration than the multi-task and deceptive dense-reward ones in the baselines. experimental results demonstrate that hess significantly outperforms existing baselines. in addition, we perform multiple ablations illustrating the importance of the various components of hess. LINEBREAK background LINEBREAK we consider a markov decision process (mdp) defined as a tuple (s, a, p, r, γ), where s is a state space, a is an action space, p (s(cid:48)|s, a) is an unknown transition function, r : s × a → r is a reward function, and γ ∈ [0, 1) is a discount factor. let π(a|s) denote a stochastic policy over actions given states. the objective of reinforcement learning (rl) is to learn a policy that maximizes the expected cumulative discounted rewards: maxπ ep,π[(cid:80)t t=0 γtr(st, at)], where t denotes the horizon length. LINEBREAK figure 1: the hierarchical framework of gchrl. LINEBREAK in gchrl, a two-level hierarchical policy πhier is composed of a high-level policy πh(gt|st) and a low-level policy πl(at|sl t, gt), as illustrated in figure 1. a latent subgoal space g is abstracted by a representation function φ(s) : s → rd. 
the high-level policy πh samples a subgoal gt from a neighborhood ∆(φ(st)) of the current latent state φ(st) every c steps, i.e., when t ≡ 0 (mod c): ∆(φ(st)) = {gt ∈ g(cid:12) (cid:12)d(gt, φ(st)) ≤ rg}, where c is a fixed constant, d is a distance function, and rg is a radius of the neighborhood. the sampled subgoal is repeated, gt = gt−1, when t (cid:54)≡ 0 (mod c). πh is trained to optimize the expected extrinsic rewards. the low-level controller takes a primitive action at ∈ a every step, and is intrinsically rewarded to reach subgoals set by the high level, rl t = −d(gt, φ(st)). to provide dense non-zero rewards for low-level policy learning, we employ l2 distance as d. furthermore, to keep the low-level reward function rl stationary while learning φ, we concatenate states and state embeddings together as the low-level states, sl = s||φ(s). previous work (li et al., 2021) minimizes a contrastive triplet loss ltri to learn φ(s), where positive LINEBREAK pairs are adjacent states in trajectories, and negative pairs are states c steps apart. LINEBREAK ltri(st, st+1, st+c) = ||φ(st) − φ(st+1)||2 + max(0, δ − ||φ(st) − φ(st+c)||2), LINEBREAK where δ is a margin parameter. in this work, we also adopt the contrastive loss in equation 1. LINEBREAK method LINEBREAK in gchrl, the low-level policy is optimized with intrinsic rewards generated by subgoals, but external rewards for the high level are still sparse and hard to explore. furthermore, the changing subgoal space makes it even more challenging for high-level exploration. in this section, we first present a novel regularization that contributes to stable and efficient subgoal representation learning, and then introduce two measures for subgoals, novelty and potential. finally, we design an active hierarchical exploration strategy based on these two measures. LINEBREAK stable subgoal representation learning LINEBREAK both stability and efficiency are crucial to subgoal representation learning. the stability of subgoal representation contributes to the stationarity of both the high-level transition function and the low-level reward function. meanwhile, a fast learned subgoal representation could provide effective guidance to exploration. however, stability seems at odds with efficiency, e.g., training neural networks with smaller learning rates leads to better stability but is slower (goodfellow et al., 2016). in this subsection, we develop a novel state-specific regularization that resolves the stability-efficiency dilemma in subgoal representation learning. LINEBREAK to achieve stable representation learning, we propose a regularization ls which restricts the representation change during each update. ls works by anchoring the current representation φ(s) to the old subgoal representation φold(s) before the update as follows: LINEBREAK ls = es∼b[λ(s)||φ(s) − φold(s)||2], LINEBREAK where b is a replay buffer, and λ(s) is a function that controls the weight of regularization for different states. states with smaller representation losses fit the learning objective well, so λ(s) should be larger for those states. in practice, before each representation update, we rank the triplets in the buffer with ltri(st, st+1, st+c). for the top k% of the triplets with the minimum representation losses, we set λ(s) = λ0 > 0 for the anchor states (st) in these triplets, and for the other states, λ(s) = 0. 
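To make the representation objective concrete, the sketch below renders the triplet loss of Equation 1, the state-specific stability regularization of Equation 2, and the low-level intrinsic reward in PyTorch. It is a minimal illustration under stated assumptions, not the authors' released code: the encoder `phi`, the frozen copy `phi_old`, the margin `delta`, the weight `lambda_0`, and the stable ratio `k` are placeholder names, and the top-k% selection is done per minibatch rather than over the whole buffer.

```python
import torch

def triplet_loss(phi, s_t, s_t1, s_tc, delta):
    """Contrastive triplet loss (Eq. 1): pull embeddings of adjacent states
    together and push states c steps apart beyond the margin delta."""
    d_pos = torch.norm(phi(s_t) - phi(s_t1), p=2, dim=-1)
    d_neg = torch.norm(phi(s_t) - phi(s_tc), p=2, dim=-1)
    return d_pos + torch.clamp(delta - d_neg, min=0.0)

def stability_loss(phi, phi_old, s_anchor, per_triplet_loss, k=0.3, lambda_0=1.0):
    """State-specific regularization (Eq. 2): anchor phi(s) to the pre-update
    representation phi_old(s), but only for anchor states of the k fraction of
    triplets whose triplet loss is already smallest (lambda(s) = lambda_0 for
    those states, 0 otherwise)."""
    with torch.no_grad():
        cutoff = torch.quantile(per_triplet_loss, k)        # threshold for the best-fitting k%
        lam = torch.where(per_triplet_loss <= cutoff,
                          torch.full_like(per_triplet_loss, lambda_0),
                          torch.zeros_like(per_triplet_loss))
        anchor_old = phi_old(s_anchor)                       # frozen snapshot of phi before the update
    drift = torch.norm(phi(s_anchor) - anchor_old, p=2, dim=-1)
    return (lam * drift).mean()

def low_level_reward(phi, g_t, s_t):
    """Intrinsic reward for the low level: negative L2 distance to the subgoal."""
    return -torch.norm(g_t - phi(s_t), p=2, dim=-1)
```

In practice `phi_old` would simply be a detached snapshot of the encoder taken before each representation update.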
LINEBREAK the stability regularization enables us to use prioritized sampling (hinton, 2007) to improve representation learning efficiency without hurting its stability. the overall loss for subgoal representation learning is: LINEBREAK lφ = e(st,st+1,st+c)∼bp [ltri(st, st+1, st+c)] + ls, (3) where bp is a prioritized replay buffer, and states with larger ltri have higher probabilities of being sampled for training. LINEBREAK measures for subgoals LINEBREAK novelty measure: inspired by count-based exploration methods (strehl & littman, 2008; bellemare et al., 2016; ostrovski et al., 2017; tang et al., 2017), we formulate the novelty of subgoals with visit counts. as desirable subgoals should incentivize the agent to explore faraway novel states, we consider both immediate counts n(φ(si)) and expected cumulative counts of future states, and the novelty measure n (φ(si)) is defined as follows: LINEBREAK n (φ(si)) = eπhier [ LINEBREAK γjn(φ(si+jc))]. LINEBREAK in practice, we partition the low-dimensional continuous latent space into discrete cells, i.e., the state embeddings are mapped into cells containing them. by maintaining how many times each cell is visited, we could estimate the visit count n(φ(s)). in practice, the novelty measure is approximated with the data from the replay buffer, and the implementation detail is described in appendix b. LINEBREAK potential measure: with online representation learning, the novelty measure is a mixture of counts in the past and current representation spaces, so it might mislead the exploration, as demonstrated in figure 3. our insight is that desirable novel subgoals should be reachable and effectively guide the agent to unexplored areas. therefore, we design a potential measure for subgoals to regularize the novelty measure. in the following, we first introduce a subgoal generation mechanism, which is involved in the definition of the potential measure. LINEBREAK to guide the low-level controller to reach unexplored states, the subgoals pursued by the low level had better be in unexplored areas as well. therefore, we propose to add some perturbations to the subgoal gt selected from the replay buffer and obtain an imagined subgoal ge, and then pass ge to the low-level policy. to enable ge in an unexplored or less explored area, the perturbation is conducted as extending gt in the direction of gt − φ(st), as illustrated in figure 2. LINEBREAK figure 2: a schematic illustration of the subgoal selection and perturbation. LINEBREAK the imagined subgoal is ge = gt + de(gt − φ(st))/||gt − φ(st)||2, where de denotes an extended distance. as ge is imagined, and may have not been visited before, it could be inherently unreachable due to the transition function of the mdp or online representation learning, e.g., there may be obstacles in navigation tasks. to encourage the agent to explore in promising and reachable directions, we define a measure of potential u (gt) for the selected subgoal gt as the expected negative distance between the ending state φ(st+c) and ge: LINEBREAK u (gt) = est,at,...,st+c[−d(φ(st+c), ge)], where LINEBREAK st ∼ ρh, at ∼ πl(at|sl LINEBREAK t, ge), st+1 ∼ p (st+1|st, at). LINEBREAK ρh is the high-level state distribution under the hierarchical policy πhier. building on (li et al., 2021), the representation learned with the contrastive objective preserves temporal coherence, and the distances between nearby features approximately represent the number of transitions between them (oord et al., 2018). 
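A concrete, simplified way to realize these two measures is sketched below: visit counts are kept per cell of a uniformly discretized latent space, the novelty of Equation 4 is approximated as a discounted sum of those counts along a buffered high-level trajectory, and the potential of Equation 5 is estimated via the perturbed (imagined) subgoal. The cell size, discount, and data layout are assumptions made for illustration, not values taken from the paper.

```python
import numpy as np
from collections import defaultdict

class CellStats:
    """Visit counts over a uniform discretization of the latent subgoal space."""
    def __init__(self, cell_size=1.0, gamma=0.99):
        self.cell_size, self.gamma = cell_size, gamma
        self.counts = defaultdict(int)

    def cell(self, z):
        return tuple(np.floor(np.asarray(z) / self.cell_size).astype(int))

    def add_visit(self, z):
        self.counts[self.cell(z)] += 1

    def novelty(self, latent_traj):
        # Eq. 4: expected cumulative count, approximated by a discounted sum of
        # immediate counts along a trajectory sampled every c steps from the buffer.
        return sum(self.gamma ** j * self.counts[self.cell(z)]
                   for j, z in enumerate(latent_traj))

def imagined_subgoal(g_t, z_t, d_e):
    """Perturb the selected subgoal g_t by extending it a distance d_e away from
    the current latent state z_t = phi(s_t) (the construction of Fig. 2)."""
    direction = g_t - z_t
    return g_t + d_e * direction / (np.linalg.norm(direction) + 1e-8)

def potential_sample(z_end, g_e):
    """One Monte-Carlo sample of Eq. 5: negative distance between the embedding
    of the c-step ending state and the imagined subgoal g_e."""
    return -np.linalg.norm(z_end - g_e)
```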
hence, with higher potential u (gt), ge is more reachable, and thus exploring the direction of gt is more promising to expand the explored areas. the potential u (gt) in equation 5 is estimated from the data in buffer as well. to calculate the novelty, we partition the continuous representation space into discrete cells. similarly, we maintain the potential of a cell by averaging the potential of features in that cell, and use the potential of the cell to represent that of the states inside it. LINEBREAK figure 3: visualization of visitation density and potential in the ant maze task. (a) visitation density in the x, y coordinate space of the ant robot. (b) visitation density in the subgoal representation space. (c) feature changes between 0.15m and 0.2m steps for the same batch of states. (d) potential for the sampled state embeddings. (e) combination of novelty and potential in section 3.3. our method selects state embeddings with darker colors as subgoals. LINEBREAK illustrative example: in figure 3(a) and 3(b), we visualize the visitation density at 0.2m steps in the ant maze task in figure 5. the visitation density is the counts normalized by the total number of transitions in the buffer. comparing figure 3(a) to 3(b), we found a mismatch between the density in the x, y coordinate space and the representation space, especially in the red box, since the updated LINEBREAK representation at 0.2m steps has projected the state embeddings to somewhere new, and the feature changes are noted by the black arrows in figure 3(c). furthermore, the counts in frontiers of the latent explored areas would not increase with more exploration, as the online learned representation keeps changing. with the potential measure, we could distinguish the promising novel areas from the unpromising ones, illustrated in figure 3(d). LINEBREAK active exploration strategy LINEBREAK in the reactive exploration methods with intrinsic rewards, the agent needs to learn how to maximize cumulative intrinsic rewards before performing exploratory behaviors. therefore, the effects of intrinsic rewards on behavior policy is indirect. furthermore, the changing intrinsic rewards would introduce additional non-stationarity into hrl. to make the proposed measures directly influence the behavior policy and avoid the non-stationary issue, we develop an active exploration strategy without intrinsic rewards for high-level policy learning. based on the two measures in section 3.2, the proposed exploration strategy selects and explores a novel subgoal gt with high potential, specified as follows: LINEBREAK gt = arg min LINEBREAK φ(s) LINEBREAK subject to LINEBREAK (cid:26) d(φ(s), φ(st)) ≤ rg LINEBREAK s ∈ b, LINEBREAK where st is the current state and α is a scaling factor. to balance these two measures more easily, we normalize future count n (φ(s)) by the total number of transitions in the buffer, and denote it with (cid:101)n (φ(s)). we visualize the proposed exploration incentive in figure 3(e). taking the state embeddings with darker colors as subgoals could lead the agent to expand the explored areas. LINEBREAK algorithm 1 provides the full procedure of the proposed approach, hess. to balance exploration and exploitation, hess probabilistically utilizes the exploration strategy in equation 6 or explores with the learned policy πh (line:5-6), and the probability p is decayed over the course of learning. 
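The selection step of Equation 6 can be written as a simple search over buffered embeddings near the current latent state. The exact objective combining the two measures did not survive extraction above; the sketch assumes the normalized cumulative count minus alpha times the potential is minimized, which is consistent with the later description of alpha as a scaling factor whose smaller values favour more novel subgoals. The candidate format and helper names are illustrative.

```python
import numpy as np

def select_subgoal(candidates, z_t, alpha, r_g, total_transitions):
    """Active high-level exploration (a reconstruction of Eq. 6): among state
    embeddings drawn from the replay buffer, restrict to the neighborhood
    d(phi(s), phi(s_t)) <= r_g and pick the one with low normalized cumulative
    count and high potential.
    candidates: iterable of (embedding, cumulative count N, potential U) triples."""
    best, best_score = None, np.inf
    for z, cum_count, potential in candidates:
        z = np.asarray(z, dtype=float)
        if np.linalg.norm(z - z_t) > r_g:
            continue                                      # stay near the current latent state
        n_tilde = cum_count / max(total_transitions, 1)   # normalize by buffer size
        score = n_tilde - alpha * potential               # assumed trade-off between the measures
        if score < best_score:
            best, best_score = z, score
    return best
```

The selected embedding would then be perturbed into the imagined subgoal g_e, as in the previous sketch, before being passed to the low-level policy.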
the representation φ is updated every i episodes (line:13-15) so that φ is stable during the update interval, and the proposed regularization is to maintain the stability before and after the update. LINEBREAK if t ≡ 0 (mod c) then LINEBREAK algorithm 1 hess algorithm 1: initialize: πh(g|s), πl(a|sl, g) and φ(s). 2: for i = 1..num episodes do for t = 0..t − 1 do 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: end if 15: 16: end for 17: return: πh, πl and φ. LINEBREAK end if rt, st+1 ← execute at ∼ πl(·|sl LINEBREAK end for if i ≡ 0 (mod i) then LINEBREAK gt = gt−1 LINEBREAK else LINEBREAK update φ with eq. 3 using m minibatches. LINEBREAK with a probability of p, explore with the strategy in eq. 6. with a probability of 1 − p, sample subgoal gt with learned policy πh. update πh with an off-policy rl method. LINEBREAK t, gt), and update πl with an off-policy rl method. LINEBREAK related work LINEBREAK learning effective subgoal representations has been an important and challenging problem in gchrl (dwiel et al., 2019). previous works have proposed to learn the subgoal representation in an end-to-end manner with policies (vezhnevets et al., 2017; dilokthanakul et al., 2019), or by bounding the sub-optimality of the hierarchical policy (nachum et al., 2019a), or with a learning objective of slow dynamics (li et al., 2021). nevertheless, those methods have not considered the stability of the subgoal representation learning, which results in a non-stationary high-level learning LINEBREAK environment. other prior methods utilize a predefined or pretrained subgoal space (p´er´e et al., 2018; nair & finn, 2019; zhang et al., 2020; ghosh et al., 2018) to keep the stability of the subgoal representation. however, those methods require task-specific human knowledge or extra training data. in this work, we propose a novel regularization to stabilize the subgoal representation learning. specifically, the subgoal representation is learned with a contrastive triplet loss. the contrastive objective is used as an auxiliary representation loss in other rl literature as well (oord et al., 2018; laskin et al., 2020; fortunato et al., 2019). LINEBREAK benefiting from temporally extended exploration, hrl methods have shown better exploration abilities (nachum et al., 2019b). bottom-up hrl works learn a set of diverse skills or options in a self-supervised manner, and use those semantically meaningful low-level skills to explore in downstream tasks (jinnai et al., 2020; co-reyes et al., 2018; eysenbach et al., 2018; sharma et al., 2019; li et al., 2019). nevertheless, the skills produced by those methods may not be required by the downstream tasks, since when learning the skills, the agent knows nothing about the downstream task rewards. in gchrl, zhang et al. (2020) restricted the high-level action space to a k-step adjacent region of the current state to achieve better sample efficiency. r¨oder et al. (2020) aimed at improving high-level exploration via curiosity-driven intrinsic rewards. however, those methods require oracle subgoal spaces designed with prior knowledge. in contrast, hess learns subgoal representations online. li et al. (2021) proposed an hrl approach, lesson, with online subgoal representation learning, and provided theoretical analysis that the learned representation could support efficient exploration. however, without intrinsic rewards at the high level, lesson could hardly solve long-horizon tasks with extremely sparse rewards. 
this paper improves exploration from an orthogonal perspective from lesson, and develops an active exploration strategy for high-level policy learning. LINEBREAK we propose two measures for subgoals: novelty and potential. the cumulative count in the novelty measure is related to successor features (kulkarni et al., 2016b). previous exploration methods use successor features to propagate the uncertainty of value function (janz et al., 2019) or whether the features occur (machado et al., 2020). in contrast, our method propagates an explicit estimation of the number of feature occurrence through trajectories, which is conducted without function approximation. the potential measure of reachability shares some similarities with empowerment (salge et al., 2014), which is defined as the agent’s control over its environment. by maximizing empowerment (campos et al., 2020; gregor et al., 2016; mohamed & rezende, 2015), an agent could reach more diverse states. unlike empowerment measured by mutual information, the potential measure is formalized with the distances between desired subgoals and achieved latent states. in appendix c, we discuss how hess is related to goal selection strategies in the multi-goal rl domain (schaul et al., 2015). LINEBREAK experiments LINEBREAK we evaluate hess on a set of challenging sparse-reward environments that require a combination of locomotion and object manipulation skills, aiming at answering the following questions: (1) can hess outperform state-of-the-art exploration strategies in sample efficiency and overall performance? (2) can hess learn a stable subgoal representation space efficiently? (3) how important are the various components of the hess agent? (4) what do the subgoals selected by hess look like? (5) how much would the choice of hyper-parameters influence the experimental performance? for better sample efficiency, we utilize an off-policy algorithm, soft actor-critic (sac) (haarnoja et al., 2018), to learn both-level policies. to mitigate the influence of the entropy term of sac on hess, we use sac with automatic entropy tuning. LINEBREAK environment setup LINEBREAK we evaluate on a suite of mujoco (todorov et al., 2012) tasks that are widely used in the hrl community, including ant (or point) maze, ant push, ant fourrooms, cheetah hurdle, cheetah ascending, and two variants with low-resolution image observations. the experiments with image input are labeled ‘images’. to demonstrate the exploration ability of hess, we adapt those tasks to make them more challenging. different from the settings of random start or random goal with dense external rewards (nachum et al., 2018; 2019a; li et al., 2021; zhang et al., 2020), in the tasks used in this work, a simulated robot starts from a fixed position and needs to reach a faraway target position LINEBREAK figure 4: learning curves of the proposed method and baselines on all the tasks. the y axis shows the average success rate in 10 episodes. each line is the mean of 5 runs with shaded regions corresponding to a confidence interval of 95%. all the curves have been smoothed equally for visual clarity. code is available at https://github.com/siyuanlee/hess. with sparse external rewards. furthermore, there is no predefined subgoal space provided. more details about the environments and implementation are available in appendix a and b, respectively. 
LINEBREAK comparative analysis LINEBREAK we conduct experiments comparing the proposed hierarchical exploration approach to the state-ofthe-art baselines: (1) h-icm (pathak et al., 2017): prediction error in the latent subgoal space as intrinsic rewards to the high level. (2) h-sr (machado et al., 2020): count-based exploration bonus in the high level, and the counts are estimated by the norm of the successor representation. (3) dads (sharma et al., 2019): an unsupervised skill discovery method via predicting dynamics. (4) lesson (li et al., 2021): an gchrl method learning the subgoal representation online using triplet loss, and no intrinsic rewards in the high level. (5) sac (haarnoja et al., 2018): the non-hierarchical base rl algorithm used in this work. LINEBREAK as shown in figure 4, the proposed method substantially outperforms the baselines in all the tasks. the outperformance in the ant fourrooms task is more significant since the maze scale of this task is larger, and the exploration problem is harder. both h-icm and h-sr underperform our method. the dynamic model of the simulated physics engine is relatively easy to fit by neural networks, so the intrinsic rewards of h-icm may vanish before the policies have been learned well. the hsr method estimates visit counts using the (cid:96)1-norm of the successor representation, which is the expected discounted future state occupancies in trajectories. this idea shares some similarities with the cumulative counts in the novelty measure. nevertheless, the successor representation estimates the expected future state occupancy starting from a given state (dayan, 1993), but not the visitation number of the given state, which is less helpful to promote exploration. LINEBREAK when predicting dynamics in the whole observation space, the intrinsic rewards of dads could hardly help the agent learn gait skills that make the robots move. the original paper (sharma et al., 2019) has also demonstrated that without x, y prior, the trajectories generated by a primitive skill have a large variance. we provide a visualization of those skills in appendix d. as a result, the success rates of dads on all the tasks are zero. hence, we omit them from figure 4. lesson uses no intrinsic reward to promote exploration in the high level. even with a good subgoal representation space, it still could not solve those tasks with sparse rewards. the non-hierarchical method sac performs poorly in all the tasks, demonstrating the strength of the hierarchical structures in solving long-horizon tasks with sparse rewards. LINEBREAK visualization of representation learning LINEBREAK we visualize state embeddings of 5 trajectories from the beginning to the end of the ⊃-shape corridor in the ant maze (images) task in figure 6, comparing the representations learned by hess and hess without stability regularization. those representations are all learned from top-down image observations. LINEBREAK hess is able to learn an effective subgoal representation at an early stage. by optimizing the contrastive objective, the euclidean distances in the latent space approximately cor LINEBREAK figure 5: ant maze LINEBREAK figure 6: subgoal representation learning process in the ant maze (images) task. each subfigure contains 2d state embeddings of 5 trajectories in the ⊃-shape maze (red for the start, blue for the end). for larger axis labels, see videos of the representation learning process at https://sites. google.com/view/hess-iclr. 
LINEBREAK respond to the number of transitions between states in local areas, which provides subgoal-reaching rewards for low-level policies. but globally, the distance could not represent the number of transitions. for example, it requires lots of transitions to move from the start to the goal in the ⊃-shaped maze, but they have not been pushed much away in the latent space, since there is no explicit constraint in the triplet loss for the distance between features more than c timesteps apart. comparing figure 6(a) to figure 6(b), we find that with the stability regularization, when the features have already fitted the learning objective well, their changes are minor afterward. there is almost no change for the red features from 0.25m steps to the end in figure 6(a). LINEBREAK without the stability regularization, all the features are changing dramatically. the non-stationary issue makes high-level policy learning very hard. in figure 6(b), at 0.25m steps (the second subfigure), the high-level agent at the red start should select a subgoal to move upwards. however, at 0.65m steps (the fourth subfigure), selecting a subgoal in the right of the start state is better. LINEBREAK ablative analysis of various components LINEBREAK to understand the importance of various design choices of hess, we conduct ablation studies on the subgoal representation learning and the proposed exploration strategy separately. in the ablation studies of representation learning, we compare hess, hess without stability regularization, and hess without both stability regularization and prioritized sampling, shown by the curves without markers in figure 7. in the tasks with image input, the advantage of stability regularization and prioritized sampling is more substantial, since the representation of image observations is harder to learn. in the tasks with vector input, the subgoal representation has a good generalization ability, so the effect of stability regularization and prioritized sampling becomes less significant. LINEBREAK figure 7: ablation studies of representation learning and the exploration strategy. in the ablation studies of the exploration strategy, we compare hess, hess without the potential measure, replacing cumulative counts n (φ(s)) with immediate counts n(φ(s)), and reactive exploration with high-level intrinsic rewards ri(s) = η1/(cid:112)n (φ(s)) + η2/(cid:112)−u (φ(s)), shown by the curves with markers in figure 7. the potential measure is extremely important, since without it, the subgoals are concentrated on the areas with low counts, but not extendable, like the areas near the corners in figure 8(b). therefore, the success rates without the potential measure are much lower. LINEBREAK with immediate counts n(φ(s)) as the novelty measure, the agent loses a long-term vision to seek out novel subgoals, and only focuses on novel states in its neighborhood. hence, the states near the walls are selected as subgoals more frequently in figure 8(c) than in figure 8(a), but those subgoals are still better for exploration than the subgoals selected without the potential measure. in the ablation study of active exploration, we can see that by actively selecting favorable subgoals, hess has achieved better sample efficiency than the reactive exploration method with intrinsic LINEBREAK figure 8: visualization of the selected subgoals during 10 episodes and their corresponding positions in the x, y space in the ant maze task at 0.25 million timesteps. 
LINEBREAK rewards, as the reactive method could not immediately seek the novel and promising subgoals even when it has detected them. LINEBREAK ablation studies on hyper-parameter selection LINEBREAK in this section, we set up a set of ablation tests on several hyper-parameters of hess in the ant push (images) task. from figure 9, we can see that most parameters work in a large range. furthermore, we use a single suite of parameters in all the tasks in this paper, which also indicates that the proposed approach is robust to hyper-parameter selection. LINEBREAK figure 9: ablation studies of hyper-parameter selection in the ant push (images) task. LINEBREAK scaling factor α balances novelty and potential. smaller α encourages hess to choose more novel state embeddings as subgoals. results in figure 9 (a) indicate that hess is robust against α, since the proposed method could work well, when the potential measure mitigates the negative influence of novelty inaccuracy on exploration. for all tasks in section 5.2, we set α = 0.03. LINEBREAK extended distance de is the distance between the subgoal selected from the replay buffer and the imagined subgoal ge. to keep ge still near the current latent state, de should be no larger than the radius rg of the neighborhood for selecting subgoals. the performance of hess is reasonable as long as de is not too large, as an excessively large de may cause the subgoal ge pursued by the low level too far away from the current latent state. for all tasks in section 5.2, we set de = 5.0. LINEBREAK stable ratio k%: the stability regularization constrains the changes of the embeddings for k% of the states with the minimum triplet loss in the buffer. when k% is too large, the representation learning efficiency may be slightly affected, which is harmful to the hierarchical policy learning, as indicated in figure 9 (c). for all tasks in section 5.2, we set k% = 0.3, i.e., the regularization is applied to 30% of the data in buffer. LINEBREAK low-level policy length c is an important and common hyper-parameter in hrl. with a larger c, the burden of high-level decision-making is lighter, but low-level policy learning becomes harder. the learning performance is better when c is 20 or 50 with an episode length of 500. among all the hyper-parameters, c seems to influence the performance most. for all tasks and baselines in section 5.2, we set c = 50. LINEBREAK conclusion LINEBREAK to solve long-horizon sparse-reward tasks with gchrl, we design novelty and potential measures for subgoals upon stable subgoal representation learning, and develop a hierarchical exploration strategy that actively seeks out new promising subgoals and states. as the dimension of the subgoal space in this work is low, we employ a naive count estimation method in the representation space. when the dimension of the subgoal space is higher, it would be better to learn a density model (papamakarios et al., 2019), or utilize more advanced count estimation methods, such as kernelbased methods (davis et al., 2011). for the future work, we believe that the idea of stability is general and could be used beyond hrl. for example, maybe many existing techniques in non-deep rl could be applied to representations learned with stability as well. LINEBREAK acknowledgement LINEBREAK this work is supported by science and technology innovation 2030 – “new generation artificial intelligence” major project (no. 2018aaa0100904) and national natural science foundation of china (no. 20211300509). 
LINEBREAK reproducibility statement LINEBREAK for reproducibility, we include an anonymous downloadable source code in the supplementary material. beyond that, we describe the details about the environments and the implementation in appendix a and b. LINEBREAK references LINEBREAK marc g bellemare, sriram srinivasan, georg ostrovski, tom schaul, david saxton, and arxiv preprint LINEBREAK remi munos. unifying count-based exploration and intrinsic motivation. arxiv:1606.01868, 2016. LINEBREAK yuri burda, harri edwards, deepak pathak, amos storkey, trevor darrell, and alexei a efros. LINEBREAK large-scale study of curiosity-driven learning. arxiv preprint arxiv:1808.04355, 2018. LINEBREAK v´ıctor campos, alexander trott, caiming xiong, richard socher, xavier giro-i nieto, and jordi torres. explore, discover and learn: unsupervised discovery of state-covering skills. in international conference on machine learning, pp. 1317–1327. pmlr, 2020. LINEBREAK john co-reyes, yuxuan liu, abhishek gupta, benjamin eysenbach, pieter abbeel, and sergey levine. self-consistent trajectory autoencoder: hierarchical reinforcement learning with trajecin international conference on machine learning, pp. 1009–1018. pmlr, tory embeddings. 2018. LINEBREAK richard a davis, keh-shin lii, and dimitris n politis. remarks on some nonparametric estimates LINEBREAK of a density function. in selected works of murray rosenblatt, pp. 95–100. springer, 2011. LINEBREAK peter dayan. improving generalization for temporal difference learning: the successor representa LINEBREAK peter dayan and geoffrey e hinton. feudal reinforcement learning. in advances in neural infor LINEBREAK mation processing systems, pp. 271–278, 1993. LINEBREAK nat dilokthanakul, christos kaplanis, nick pawlowski, and murray shanahan. feature control as ieee transactions on neural net LINEBREAK intrinsic motivation for hierarchical reinforcement learning. works and learning systems, 30(11):3409–3418, 2019. LINEBREAK zach dwiel, madhavun candadai, mariano phielipp, and arjun k bansal. hierarchical policy LINEBREAK learning is sensitive to goal space design. arxiv preprint arxiv:1905.01537, 2019. LINEBREAK benjamin eysenbach, abhishek gupta, julian ibarz, and sergey levine. diversity is all you need: LINEBREAK learning skills without a reward function. arxiv preprint arxiv:1802.06070, 2018. LINEBREAK carlos florensa, david held, xinyang geng, and pieter abbeel. automatic goal generation for reinforcement learning agents. in international conference on machine learning, pp. 1515–1528. pmlr, 2018. LINEBREAK meire fortunato, melissa tan, ryan faulkner, steven hansen, adri`a puigdom`enech badia, gavin buttimore, charlie deck, joel z leibo, and charles blundell. generalization of reinforcement learners with working and episodic memory. arxiv preprint arxiv:1910.13406, 2019. LINEBREAK dibya ghosh, abhishek gupta, and sergey levine. learning actionable representations with goal LINEBREAK conditioned policies. arxiv preprint arxiv:1811.07819, 2018. LINEBREAK ian goodfellow, yoshua bengio, and aaron courville. deep learning. mit press, 2016. LINEBREAK karol gregor, danilo jimenez rezende, and daan wierstra. variational intrinsic control. arxiv LINEBREAK tuomas haarnoja, aurick zhou, kristian hartikainen, george tucker, sehoon ha, jie tan, vikash kumar, henry zhu, abhishek gupta, pieter abbeel, et al. soft actor-critic algorithms and applications. arxiv preprint arxiv:1812.05905, 2018. LINEBREAK geoffrey e hinton. to recognize shapes, first learn to generate images. 
progress in brain research, LINEBREAK david janz, jiri hron, przemysław mazur, katja hofmann, jos´e miguel hern´andez-lobato, and sebastian tschiatschek. successor uncertainties: exploration and uncertainty in temporal difference learning. advances in neural information processing systems, 32:4507–4516, 2019.
| 10 | [108, 573.8250784, 504.0037874, 605.7556784] |
| f2OYVDyfIB.pdf | 2,022 | 2 |
LINEBREAK scale efficiently: insights from pre-training and fine-tuning transformers LINEBREAK yi tay∗, mostafa dehghani∗, jinfeng rao, william fedus, samira abnar, hyung won chung, sharan narang, dani yogatama†, ashish vaswani, donald metzler google research & deepmind† {yitay,dehghani}@google.com LINEBREAK abstract LINEBREAK there remain many open questions pertaining to the scaling behaviour of transformer architectures. these scaling decisions and findings can be critical, as training runs often come with an associated computational cost which have both financial and/or environmental impact. the goal of this paper is to present scaling insights from pretraining and finetuning transformers. while kaplan et al. (2020) presents a comprehensive study of the scaling behaviour of transformer language models, the scope is only on the upstream (pretraining) loss. therefore, it is still unclear if these set of findings transfer to downstream task within the context of the pretrain-finetune paradigm. the key findings of this paper are as follows: (1) we show that aside from only the model size, model shape matters for downstream fine-tuning, (2) scaling protocols operate differently at different compute regions, (3) widely adopted t5-base and t5-large sizes are pareto-inefficient. to this end, we present improved scaling protocols whereby our redesigned models achieve similar downstream fine-tuning quality while having 50% fewer parameters and training 40% faster compared to the widely adopted t5-base model. we publicly release over 100 pretrained checkpoints of different t5 configurations to facilitate future research and analysis. LINEBREAK introduction LINEBREAK training transformers incurs both financial and environmental costs (schwartz et al., 2019; patterson et al., 2021). to this end, researchers and practitioners often have to work around fixed compute budgets and figure out the best ways to train their models. in lieu of the rising computation demand for training state-of-the-art transformer (vaswani et al., 2017; devlin et al., 2018; raffel et al., 2019; brown et al., 2020; fedus et al., 2021) models, the goal of this paper is to present insights and lessons from scaling transformers and making them efficient and effective for transfer learning on downstream tasks. LINEBREAK despite the insights offered in scaling laws research (kaplan et al., 2020; hernandez et al., 2021) there remain unresolved questions: should one follow fixed scaling ratios? if not, should one scale by depth? or by width? will scaling experiments on upstream pre-training generalize for downstream transfer? do scaling protocols for small models generalize to larger models? are scaling behaviours similar in all compute regions? we hope the insights presented in this paper can be useful to both practitioners and researchers in informing their scaling decisions. LINEBREAK neural scaling laws (kaplan et al., 2020) is a common resource that many look to for advice on scaling transformer architectures. however, this paper limited its scope to an exhaustive study of upstream cross entropy on language modeling tasks. it is furthermore unclear if findings from (kaplan et al., 2020) will transfer to downstream applications. specifically, kaplan et al. (2020) proposed that the performance of a transformer language model strongly depends on model size and only weakly on its shape. they also argue that many model configurations with the same number of parameters perform similarly regardless of architectural details. 
our work empirically confirms this on upstream training but finds a distinct discrepancy when considering practical downstream performance – a key insight that we believe is highly important. LINEBREAK ∗equal contribution LINEBREAK to this end, we conduct extensive experiments involving pre-training and fine-tuning over 200 transformer configurations ranging from 5m to 30b parameters. to the best of our knowledge, this is the largest empirical study of practical scaling of transformer to date that considers both upstream and practical downstream transfer. while there have been many proposed scaling protocols for convnets (tan and le, 2019; bello et al., 2021), there is still limited advice on scaling of transformer architectures, apart from (kaplan et al., 2020; li et al., 2020). hence, the key goal of this paper is to distill our experiences and insights with scaling transformer architectures and share them with the broader community. LINEBREAK contributions the overall findings and insights of the paper can be summarized as follows: LINEBREAK • we find that scaling laws may differ in upstream and downstream setups. specifically, contrary to kaplan et al. (2020), we find that downstream performance strongly depends on shape and not only on model size. hence, pretraining performance may not necessarily transfer to downstream applications. (figure 1). LINEBREAK • our findings show that pre-training perplexity can often be a deceiving indicator of downstream quality and therefore model building based on upstream perplexity can be challenging. scaling laws can differ substantially when considering metrics on actual downstream finetuning. (figure 1) LINEBREAK • given that empirical scaling laws differ when considering quality on the downstream, our work investigates the pareto-frontier of transformer configurations in this setup. we find that the canonical model configurations such as t5-base and t5-large sizes (raffel et al., 2019) are relatively inefficient (figure 2). note that these sizes are based off the canonical bert (devlin et al., 2018) base and large sizes. LINEBREAK • we find that scaling strategies differ at different compute regions, i.e., applying same strategies at different compute regions (small vs large) has a different effect on model quality. this has practical implications since finding strategies at small scale might not necessarily transfer or generalize to higher compute regions (section 4.2). LINEBREAK • after extensive empirical exploration of the pareto-frontier of transformer models, we propose a simple but effective scaling strategy which we call the deepnarrow strategy. we show that we are able to obtain model quality on par or better than canonical model sizes (e.g., base) with 50% less parameters and being 40% faster. while we highlight the limitations of this strategy, we also show that this deepnarrow strategy is applicable to all model sizes. (table 4). LINEBREAK • to consider how generalized these scaling strategies are, we conduct additional experiments on vision transformers (vit; dosovitskiy et al., 2020) to verify them in the vision domain. moreover, on top of the 17 glue (wang et al., 2018) / superglue (wang et al., 2019) and squad (rajpurkar et al., 2016) tasks we employed in our extensive study, we verify our findings via additional downstream experiments across 12 diverse language tasks (section .2). 
LINEBREAK • we release (1) the pre-trained checkpoints for our t5 models with improved scaling protocols and (2) all 100+ model checkpoints, including intermediate training checkpoints to the research community. we believe that this is a treasure trove of data to study the behaviour of large lm pretraining and finetuning especially pertaining to scaling laws. the checkpoints and code will be released at https://github.com/google-research/ google-research/tree/master/scaling_transformers. the checkpoints are now publicly available at our google cloud bucket gs://scenic-bucket/ scaling_explorer/scaling_explorer. more recently, these checkpoints are also now available on huggingface https://huggingface.co/models?other= deep-narrow. LINEBREAK related work LINEBREAK transformers (vaswani et al., 2017) have become ubiquitous in the modern deep learning stack and have seen widespread impact across not only language (devlin et al., 2018; raffel et al., 2019; brown et al., 2020) but also computer vision (dosovitskiy et al., 2020; arnab et al., 2021), reinforcement LINEBREAK table 1: table of model configurations. nl is the number of layers, df f is the size of the mlp, dmodel is the hidden size of the model. dkv is the size of each key-value vector. nh is the number of heads. p is the default model parallelism. LINEBREAK table 2: description of different knobs used in the paper to define scaling operations. LINEBREAK model LINEBREAK tiny mini small base large xl xxl xxxl LINEBREAK nl LINEBREAK df f LINEBREAK dmodel LINEBREAK dkv nh LINEBREAK #params LINEBREAK scaling op description LINEBREAK nl el dl dm kv nh ff sh skv LINEBREAK num. layers num enc. layers num. dec. layers dmodel dkv num. of heads df f shared heads tied key-values LINEBREAK learning (parisotto et al., 2020) and computational biology (senior et al., 2020). to this end, discovering empirical scaling laws of these models is a research area that has garnered considerable interest (kaplan et al., 2020; henighan et al., 2020; hernandez et al., 2021; bahri et al., 2021). LINEBREAK discovering empirical scaling laws that govern neural language model scaling has been a recent subject of keen interest (kaplan et al., 2020; hernandez et al., 2021; bahri et al., 2021). many of these works present scaling laws across a variety of axis such as model size, compute and/or dataset size. it is worth to note that many of these works evaluate on autoregressive language modeling and use cross entropy loss to measure performance (kaplan et al., 2020; hernandez et al., 2021). there are a multitude of interesting findings presented (kaplan et al., 2020) whereby the authors show that performance (loss) scales as a power-law with model size and dataset size. however, one notable claim is that architectural details (e.g., network depth and width) have minimal effects. subsequently, hernandez et al. (2021) builds upon the work of kaplan et al. (2020), evaluating scaling laws for ‘transfer’. to this end, the authors study the effect of dataset scaling on unsupervised transfer learning and finetuning. that said, the experiments of hernandez et al. (2021) are mainly targeted at dataset transfer between two different distributions (language and code) and make the same assumptions as kaplan et al. (2020) about model scaling. in a similar vein, henighan et al. (2020) studied empirical scaling laws for different domains for generative modeling in vision, video and multimodal setups. 
LINEBREAK there have been increasing demand for training and scaling transformers (shoeybi et al., 2019; raffel et al., 2019; fedus et al., 2021; conneau et al., 2019; lin et al., 2021). despite the benefits from improved performance, there are financial considerations and environmental costs (schwartz et al., 2019; patterson et al., 2021) to training these models. given that every moment spent on hardware accelerators is a cost incurring activity, we believe that research in distilling practical scaling insights and recommendations to be highly crucial (li et al., 2020; kaplan et al., 2020; bello et al., 2021). LINEBREAK notably, the research problem of making transformers efficient have also been tackled from an extensive number of angles such as (but not limited to) distillation (hinton et al., 2015), compression (zafrir et al., 2019), parameter sharing (lan et al., 2019; tay et al., 2019; zhang et al., 2021), efficient attention (tay et al., 2020c; kitaev et al., 2020; choromanski et al., 2020; tay et al., 2020b; ainslie et al., 2020; jaegle et al., 2021), architecture search (so et al., 2019), alternative non transformer-based architectures (tolstikhin et al., 2021; tay et al., 2021a; 2020a; lee-thorp et al., 2021). with so much extensive research into novel techniques to improving the efficiency of transformers, it is surprising that the standard configurations (e.g., base, large) of transformers in bert (devlin et al., 2018) or t5 (raffel et al., 2019) have not been rethought. LINEBREAK methods LINEBREAK this section describes our main experimental setup. LINEBREAK architecture we study a transformer encoder-decoder architecture that uses relative attention based of the t5 architecture (raffel et al., 2019). the choice of adopting seq2seq architec LINEBREAK tures (sutskever et al., 2014) is mainly due to their universality and ability to both subsume encoder (bert-like) and decoder (language) models within an identical framework. moreover, the universality of seq2seq architectures also allow us to fine-tune across a broad range of tasks. our implementation and experiments are performed in mesh tensorflow1 (shazeer et al., 2018) using the default t5 library2. LINEBREAK model configurations we first define eight transformer sizes, i.e., tiny, mini, small, base, large, xl, xxl and xxxl. the small, base, large, xl and xxl corresponds to the canonical t5 sizes that are released in raffel et al. (2019). we use three other sizes as starting points, e.g., tiny and mini since there is a lack of representation of transformers at lower compute regions. LINEBREAK pretraining we pretrain on the colossal cleaned common crawl corpus (c4; raffel et al., 2019). we pre-train encoder-decoder models using the span-based masked language modeling (mlm) objective (fedus et al., 2018; devlin et al., 2018). we pretrain all our models for 219 steps using 16 tpu-v3 chips. for larger models, we run our models with 64 tpu-v3 chips. we use 219 steps since majority of the experiments in (raffel et al., 2019) were conducted in the same fashion. we would also like to emphasize that the official released t5 checkpoints were pretrained on 1t tokens (1 million steps with a batch size of 2048). given that this extended long pretraining setup is infeasible given the number of experiments we would have to run, we opt to follow the standard ablation setup in (raffel et al., 2019) which pretrains on more manageable number of tokens. LINEBREAK downstream tasks we consider a myriad of downstream tasks. 
in total, we consider 17 tasks. we finetune on a mixture of glue (wang et al., 2018), superglue (wang et al., 2019), squad (rajpurkar et al., 2016) for the key downstream experiment results and report aggregate glue/superglue scores. we believe that an aggregate of 17 tasks in natural language understanding that conmprises of both high-resource and low-resource tasks gives us a good overview of a model’s downstream performance. finetuning is typically performed with 16 tpu-v3 chips. LINEBREAK notation for scaling operators for the remainder of the paper, we use a shortform code for each scaling operator applied on a standard transformer size. for example, nl32-sm refers to scaling small (sm) transformers to 32 layers (nl32). we use el,dl to represent scaling encoder and decoder layers independently, kv to represent scaling each key-value size, dm to represent scaling dmodel. nh to represent modifying the number of heads and ff to represent scaling df f . the initial/standard model sizes are tiny, mini, small, base, large, xl, xxl and xxxl. this is described in table 2. LINEBREAK convention with the exception of figure 1, all charts are plotted with flops as the main compute metric. we use number of params for figure 1 to align with kaplan et al. (2020). all of the downstream results are plot with superglue accuracy (wang et al., 2019) as the y-axis. due to the lack of space, we report charts/plots of other metrics (params of speed) and other tasks (glue or squad) in the supplementary material. all parameter counts also include the embedding parameters. we re-emphasize that it is critical to take into account multiple facets of efficiency and therefore report all three key metrics (flops, throughput/speed and parameter count) in the supplementary material. LINEBREAK model and data parallelism by default, our models are trained across multiple workers via data parallelism. as per convention in the t5 paper, our larger models use the default model parallelism. specifically, this is set to 2 for large models, 8 for xl models and 32 for xxl models. LINEBREAK analysis and results LINEBREAK this section presents our overall findings and key results. LINEBREAK 1https://github.com/tensorflow/mesh 2https://github.com/google-research/text-to-text-transfer-transformer LINEBREAK (a) pre-training scaling LINEBREAK (b) fine-tuning scaling LINEBREAK figure 1: the predictability and unpredictability of pre-training versus fine-tuning. while the upstream pre-training performance measured by negative log-perplexity scales with model size quite independently from the model shape, the downstream performance (superglue (avg) score) does not. this indicates that the shape of models plays an important role on how it performs on the target task and the performance is not merely a function of parameter size. LINEBREAK model shape matters LINEBREAK we extend the results of kaplan et al. (2020) to fine-tuning and present model shape dependence not highlighted in hernandez et al. (2021). kaplan et al. (2020) studies pre-training (upstream) and concludes that performance depends only weakly on model shape, but strongly on model size. hernandez et al. (2021) extends this work to measure an effective data transfer measure when pretraining and then fine-tuning on a python dataset. however, this work does not consider details of model shape, and instead focused on the relative predictability with model scale alone. 
our work stands in contrasts since we find that model shape matters considerably for downstream fine-tuned performance. LINEBREAK figure 1 shows compute-performance scatter plots for pre-training (left) and fine-tuning (right) over a dozen transformers. the models considered are sampled diversely within a two-order of magnitude band of model scales. we adjust the model shape primarily through depth variations, starting with configurations such as xxxl (33b), xxl (11b), xl (3b) and lg (750m) parameters but have their depths/lengths modified. from figure 1 reveals a strong correlation of the upstream performance with model size, corroborating the neural scaling laws of kaplan et al. (2020). but the strong pre-training correlation largely vanishes when fine-tuning these models on superglue (wang et al., 2019). while we confirm the findings of kaplan et al. (2020) that performance scales strongly dependent on model size but weakly on model shape, we find that model shape (such as depth-scaling) is highly important for downstream transfer – a dimension that is not considered in kaplan et al. (2020). LINEBREAK as a substantiating point and additional context to figure 1, we also show via a counter-example that pretraining perplexity is not indicative of transfer performance, i.e., we explicitly show that a case (in table 3) where a model can have outstanding pre-training perplexity but substantially undeliver when it comes to downstream performance. to the best of our knowledge, while this has been mentioned implicitly in several existing works (narang et al., 2021), this is the first work explicitly shows this point. LINEBREAK zooming in versus zooming out here, one may argue that a general trend (even on downstream) may still exist if we zoom out and cover a very wide range of model sizes (e.g., very tiny to very large). a tiny model is not likely to outperform a very large model no matter how well-configured it might be. our purpose here is not to contradict this general trend but to distinguish between both arguments. we argue that, in practical setups, comparisons between models and scaling decisions are often made when zooming-in and our pairwise comparisons above are not on largely different LINEBREAK table 3: upstream performance does not guarantee downstream performance. example points from figure 1. a model with improved upstream quality (as measured by validation perplexity) can do significantly worse on transfer if the shape setting is not right. hence, pre-training perplexity can be misleading. LINEBREAK name LINEBREAK nl LINEBREAK df f LINEBREAK dmodel LINEBREAK dkv nh LINEBREAK #params LINEBREAK ppl LINEBREAK downstream LINEBREAK (a) t5-small model LINEBREAK (b) t5-base model LINEBREAK (c) t5-large model LINEBREAK figure 2: downstream scaling properties is scale-dependent. the downstream performance on superglue has qualitatively different scaling properties across models sizes. from left to right, we fine-tune model configurations closely matched to t5-small, t5-base and t5-large. LINEBREAK models, rather those that are on the same neighborhood in the size (close in the x-axis). thus, what we claim is that when you zoom in, which is what happen in practice, it is not uncommon to see cases similar to the models in table 3 where taking the upstream perplexity into account may lead to a sub-optimal choice. it is also worth to mention that zoom-ing in on upstream returns very different trends compared to zoom-ing in on downstream results. 
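Before moving to the compute-region analysis, it may help to make the scaling shorthand behind the configuration names explicit. The snippet below expands codes such as NL32-SM into a full configuration; the base shapes and dictionary keys are rough, illustrative renderings of the canonical T5 Small/Base/Large sizes rather than a transcription of Table 1.

```python
# Rough renderings of canonical T5 shapes (nl = layers, dm = d_model,
# ff = d_ff, nh = heads, kv = d_kv); illustrative values and keys only.
BASE_CONFIGS = {
    "SM": {"nl": 6,  "dm": 512,  "ff": 2048, "nh": 8,  "kv": 64},
    "B":  {"nl": 12, "dm": 768,  "ff": 3072, "nh": 12, "kv": 64},
    "LG": {"nl": 24, "dm": 1024, "ff": 4096, "nh": 16, "kv": 64},
}

def apply_scaling_code(code: str) -> dict:
    """Expand shorthand such as 'NL32-SM' (a Small model scaled to 32 layers)
    or 'EL12-DL4-B' into an explicit configuration dictionary."""
    *ops, size = code.split("-")
    cfg = dict(BASE_CONFIGS[size])
    for op in ops:
        knob = "".join(ch for ch in op if ch.isalpha()).lower()   # e.g. 'nl'
        value = int("".join(ch for ch in op if ch.isdigit()))     # e.g. 32
        cfg[knob] = value
    return cfg

print(apply_scaling_code("NL32-SM"))
# {'nl': 32, 'dm': 512, 'ff': 2048, 'nh': 8, 'kv': 64}
```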
LINEBREAK scaling behaviour at different compute regions is different
| 5 | [108.249, 313.9590784, 430.5163577, 323.9216784] |
OjPmfr9GkVv.pdf | 2022 | 1 |
LINEBREAK enhancing cross-lingual transfer by manifold mixup LINEBREAK huiyun yang1, huadong chen1, hao zhou∗ 1, lei li2 1bytedance ai lab 2university of california, santa barbara {yanghuiyun.11,chenhuadong.howard, zhouhao.nlp}@bytedance.com [email protected] LINEBREAK abstract LINEBREAK based on large-scale pre-trained multilingual representations, recent cross-lingual transfer methods have achieved impressive transfer performances. however, the performance of target languages still lags far behind the source language. in this paper, our analyses indicate such a performance gap is strongly associated with the cross-lingual representation discrepancy. to achieve better cross-lingual transfer performance, we propose the cross-lingual manifold mixup (x-mixup) method, which adaptively calibrates the representation discrepancy and gives compromised representations for target languages. experiments on the xtreme benchmark show x-mixup achieves 1.8% performance gains on multiple text understanding tasks, compared with strong baselines, and reduces the cross-lingual representation discrepancy significantly. LINEBREAK introduction LINEBREAK many natural language processing tasks have shown exciting progress utilizing deep neural models. however, these deep models often heavily rely on sufficient annotation data, which is not the case in the multilingual setting. the fact is that most of the annotation data are collected for popular languages like english and spanish (ponti et al., 2019; joshi et al., 2020), while many long-tail languages could hardly obtain enough annotations for supervised training. as a result, cross-lingual transfer learning (prettenhofer & stein, 2011; wan et al., 2011; ruder et al., 2019) is crucial, transferring knowledge from the annotation-rich source language to low-resource or zero-resource target languages. in this paper, we focus on the zero-resource setting, where labeled data are only available in the source language. LINEBREAK recently, multilingual pre-trained models (conneau & lample, 2019; conneau et al., 2020a; xue et al., 2020) offer an effective way for cross-lingual transfer, which yield a universal embedding space across various languages. such universal representations make it possible to transfer knowledge from the source language to target languages through the embedding space, significantly improving the transfer learning performance (chen et al., 2019; zhou et al., 2019; keung et al., 2019; fang et al., 2020). besides, conneau et al. (2018) proposes translate-train, a simple yet effective crosslingual data augmentation method, which constructs pseudo-training data for each target language via machine translation. although these works have achieved impressive improvements in cross-lingual transfer (hu et al., 2020; ruder et al., 2021), significant performance gaps between the source language and target languages still remain (see table 1). hu et al. (2020) refers to the gap as the cross-lingual transfer gap, a difference between the performance on the source and target languages. LINEBREAK to investigate how the cross-lingual transfer gap emerges, we perform relevant analyses, demonstrating that transfer performance correlates well with the cross-lingual representation discrepancy (see section 3 for details). here the cross-lingual representation discrepancy means the degree of difference between the source and target language representations in the universal embedding space. 
as shown in figure 1(a), in translate-train, the representation distribution of spanish almost overlaps LINEBREAK ∗corresponding author. code is available at https://github.com/yhy1117/x-mixup. LINEBREAK (a) translate-train LINEBREAK (b) x-mixup LINEBREAK figure 1: representation visualization of four languages: english (en), spanish (es), arabic (ar) and swahili (sw) based on xlm-r. we plot the sentence representation of the xnli test set, which is parallel across 15 languages. we average hidden states of the last layer to get sentence representations and implement the dimensionality reduction by pca. obviously, the cross-lingual representation discrepancies are large in translate-train, but x-mixup reduces the discrepancy significantly. LINEBREAK with english, while arabic shows a certain representation discrepancy compared with english and swahili performs larger discrepancy, where translate-train achieves 88.6 accuracy on english, 85.7 on spanish, 82.2 on arabic and 77.0 on swahili. intuitively, a larger representation discrepancy could lead to a worse cross-lingual transfer performance. LINEBREAK in this paper, we propose the cross-lingual manifold mixup (x-mixup) approach to fill the crosslingual transfer gap. based on our analyses, reducing the cross-lingual representation discrepancy is a promising way to narrow the transfer gap. given the cross-lingual representation discrepancy is hard to remove, x-mixup directly faces the issue and explicitly accommodates the representation discrepancy in the neural networks, by mixing the representation of the source and target languages during training and inference. with x-mixup, the model itself can learn how to escape the discrepancy, which adaptively calibrates the representation discrepancy and gives compromised representations for target languages to achieve better cross-lingual transfer performance. x-mixup is motivated by robust deep learning (vincent et al., 2008), while x-mixup adopts the mixup (zhang et al., 2018) idea to handle the cross-lingual discrepancy. LINEBREAK specifically, x-mixup is designed upon the translate-train approach, faced with the exposure bias (ranzato et al., 2016) problem and data noise problem. during training, the source sequence is a real sentence and the target sequence is a translated one, while situations are opposite during inference. besides, the translated text often introduces some noises due to imperfect machine translation systems. to address them, we further impose the scheduled sampling (bengio et al., 2015) and mixup ratio in x-mixup to handle the distribution shift problem and data noise problem, respectively. LINEBREAK we verify x-mixup on the cross-lingual understanding benchmark xtreme (hu et al., 2020), which includes several understanding tasks and covers 40 languages from diverse language families. experimental results show x-mixup achieves 1.8% performance gains across different tasks and languages, comparing with strong baselines. it also reduces the cross-lingual representation discrepancy significantly, as figure 1(b) shows. LINEBREAK related work LINEBREAK multilingual representation learning recent studies have demonstrated the superiority of largescale pre-trained multilingual representations on downstream tasks. multilingual bert (mbert; devlin et al., 2019) is the first work to extend the monolingual pre-training to the multilingual setting. 
then, several extensions achieve better cross-lingual performances by introducing more monolingual or parallel data and new pre-training tasks, such as unicoder (huang et al., 2019), xlm-r (conneau et al., 2020a), alm (yang et al., 2020), mmte (siddhant et al., 2020), infoxlm (chi et al., 2020), hictl (wei et al., 2020), ernie-m (ouyang et al., 2020), mt5 (xue et al., 2020), nmt5 (kale LINEBREAK table 1: cross-lingual transfer performances of pos and ner tasks on languages with different data resources or different language families, where there are only labeled training data in english. the data resource refers to the resource of each language utilized in the pre-training process. for the language family, english belongs to the germanic languages, so we divide languages into two types: germanic one and others. results show high-resource languages outperform low-resource ones significantly and languages dissimilar to the source language tend to perform worse. LINEBREAK language type LINEBREAK source LINEBREAK language LINEBREAK mbert LINEBREAK xlm-r LINEBREAK pos ner pos ner LINEBREAK language resource LINEBREAK language family LINEBREAK et al., 2021), amber (hu et al., 2021) and veco (luo et al., 2021). they have been the standard backbones of current cross-lingual transfer methods. LINEBREAK cross-lingual transfer learning cross-lingual transfer learning (prettenhofer & stein, 2011; wan et al., 2011; ruder et al., 2019) aims to transfer knowledge learned from source languages to target languages. according to the type of transfer learning (pan & yang, 2010), previous crosslingual transfer methods can be divided into three categories: instance transfer, parameter transfer, and feature transfer. the cross-lingual transferability improves a lot when engaged with the instance transfer by translation (i.e. translate-train, translate-test) or other cross-lingual data augmentation methods (singh et al., 2019; bornea et al., 2020; qin et al., 2020; zheng et al., 2021). chen et al. (2019) and zhou et al. (2019) focus on the parameter transfer to learn a share-private model architecture. besides, other works implement the feature transfer to learn the language-invariant features by adversarial networks (keung et al., 2019; chen et al., 2019) or re-alignment (libovick´y et al., 2020; zhao et al., 2020). x-mixup utilizes both the instance transfer and feature transfer, which is based on the translate-train data augmentation approach and implements the feature transfer by cross-lingual manifold mixup. LINEBREAK mixup and its variants mixup (zhang et al., 2018) proposes to train models on the linear interpolation at both the input level and label level, which is effective to improve the model robustness and generalization. generally, the interpolated pair is selected randomly. manifold mixup (verma et al., 2019) performs the interpolation in the latent space by conducting the linear combinations of hidden states. previous mixup methods (chen et al., 2020; jindal et al., 2020) focus on the monolingual setting. however, x-mixup focuses on the cross-lingual setting and faces many new challenges (see section 4 for details). besides, in contrast to previous mixup methods, x-mixup mixes the parallel pairs, which share the same semantics across different languages. as a result, the choice of parallel pairs for interpolation can build a smart connection between the source and target languages. 
LINEBREAK analyses of the cross-lingual transfer performance LINEBREAK in this section,1 we concentrate on the cross-lingual transfer performance and find that it is strongly associated with the cross-lingual representation discrepancy. firstly, we observe the cross-lingual transfer performance on different target languages and propose an assumption. then we conduct qualitative and quantitative analyses to verify it. LINEBREAK although previous studies (hu et al., 2020; ruder et al., 2021) have shown impressive improvements on cross-lingual transfer, the cross-lingual transfer gap is still pretty large, more than 16 points in hu et al. (2020). furthermore, results in table 1 show that the performance of low-resource languages and dissimilar languages falls far behind other languages in cross-lingual transfer tasks. LINEBREAK compared with english, the representations of other languages, especially low-resource languages, are not well-trained (lauscher et al., 2020; wu & dredze, 2020), because high-resource languages dominate the representation learning process, which results in the cross-lingual representation discrepancy. besides, dissimilar languages often show differences in language characteristics (like vocabulary and word order), which also leads to the representation discrepancy. as a result, we assume that the cross-lingual transfer performance is closely related to the representation discrepancy between the source language and target languages. LINEBREAK 1in our analyses, we take english as the source language, and the dissimilar language is the language which is dissimilar to english. LINEBREAK table 2: spearman's rank correlation ρ between the cka score and cross-lingual transfer performance on two xtreme tasks, where † denotes training on the source language, and ‡ denotes the translate-train approach. ∗ denotes that the p-value is lower than 0.05. results indicate the correlation is solid. LINEBREAK following conneau et al. (2020b), we utilize the linear centered kernel alignment (cka; kornblith et al., 2019) score to indicate the cross-lingual representation discrepancy: LINEBREAK cka(x, y) = ||y^⊤x||_f^2 / (||x^⊤x||_f ||y^⊤y||_f), LINEBREAK where x and y are parallel sequences from the source and target languages, respectively. a higher cka score denotes a smaller cross-lingual representation discrepancy. LINEBREAK to verify our assumption, we perform qualitative and quantitative analyses on the relationship between the cka score and cross-lingual transfer performance. figure 3 in appendix b indicates that a higher cka score tends to induce better cross-lingual transfer performance. we also calculate the spearman's rank correlation between the cka score and the transfer performance in table 2, which shows a strong correlation between them. both the trend and the correlation score confirm that the cross-lingual transfer performance is highly related to the cross-lingual representation discrepancy.
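to make the linear cka measure above concrete, the following is a minimal numpy sketch of how it could be computed from mean-pooled sentence representations of a parallel evaluation set. it is an illustrative reconstruction based on kornblith et al. (2019), not the authors' code; the function and variable names are ours, and the explicit feature centering is an assumption.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices.

    X, Y: arrays of shape (n_sentences, hidden_dim), where row i of X and
    row i of Y represent the same (parallel) sentence in the source and
    target language. Higher values mean a smaller cross-lingual
    representation discrepancy.
    """
    # Center each feature (assumption; makes the score translation-invariant).
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

if __name__ == "__main__":
    # Example: sentence representations obtained by mean-pooling last-layer
    # hidden states of a multilingual encoder over a parallel test set.
    rng = np.random.default_rng(0)
    src = rng.normal(size=(512, 768))                      # e.g. English side
    tgt = 0.8 * src + 0.2 * rng.normal(size=(512, 768))    # a "similar" language
    print(round(linear_cka(src, tgt), 3))
```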
LINEBREAK methodology: x-mixup LINEBREAK based on the aforementioned analyses, we believe that reducing the cross-lingual representation discrepancy is the key to filling the cross-lingual transfer gap. in this section, we propose x-mixup to explicitly reduce the representation discrepancy by implementing the manifold mixup between the source language and target language. with x-mixup, the model can adaptively calibrate the representation discrepancy and give compromised representations for target languages. this section will first introduce the overall architecture of x-mixup and its details. after that, the training objectives and inference process will be shown. LINEBREAK overall architecture LINEBREAK figure 2 illustrates the overall architecture of x-mixup. sequences from the source and target languages are first encoded separately. then, within the encoder, x-mixup implements the manifold mixup between the paired sequences (original sequence and its translation) within a specific layer, where the mixup ratio controls the degree of mixup and scheduled sampling schedules the data sampling process during training. LINEBREAK notations we use s to denote the source language and t to denote the target language. h^l denotes the hidden states of a sequence in layer l. d denotes the real text data collection and d̃ denotes the translation data collection. for downstream understanding tasks, there are annotation data in the source language d_s^train = (x_s^train, y_s^train) and raw test data in the target language d_t^test = (x_t^test). through translate-train, we can get pseudo-training data in the target language d̃_t^train = (x̃_t^train, ỹ_t^train). similarly, through translate-test, we can get pseudo-test data in the source language d̃_s^test = (x̃_s^test). during training, the scheduled sampling process uses translation data2 x̃_s^train from the source language. note that we use the translation data (x̃_t^train and x̃_s^test) and translated train labels (ỹ_t^train) from the official xtreme repository, which is in the same setting as baselines. LINEBREAK 2these data are acquired by forward translation (from s to t) then backward translation (from t to s). LINEBREAK figure 2: the model architecture of x-mixup, where the cross-lingual manifold mixup process is in the green block. note that the manifold mixup process is implemented only in a certain layer (the same layer on both sides), and in other layers the process is omitted. LINEBREAK basic model we use mbert (devlin et al., 2019) or xlm-r (conneau et al., 2020a) as the backbone model. within each layer, there are two sub-layers: the multi-head attention layer and the feed-forward layer3, followed by the residual connection and layer norm. we use the same multi-head attention layer (see details in appendix a.1) as bert (devlin et al., 2019), where the inputs are query, key, and value respectively. in layer l + 1, the hidden states of the source sequence x_s and target sequence x_t are acquired by the multi-head attention: LINEBREAK h_s^{l+1} = multihead(h_s^l, h_s^l, h_s^l), h_t^{l+1} = multihead(h_t^l, h_t^l, h_t^l). LINEBREAK manifold mixup to reduce the cross-lingual representation discrepancy, a straightforward idea is to find compromised representations between the source and target languages. it is difficult to find such representations directly because of the varying degrees of difference across languages, like vocabulary and word order. however, manifold mixup provides an elegant way to get intermediate representations by conducting linear interpolation on hidden states. LINEBREAK to extract target-related information from the source hidden states, the target hidden states are used as the query, and the source hidden states are used as the key and value. this cross-attention process is computed as LINEBREAK h_{t|s}^{l+1} = multihead(h_t^{l+1}, h_s^{l+1}, h_s^{l+1}), LINEBREAK which shares parameters with the multi-head attention.
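the cross-attention above, together with the interpolation and the entropy-based mixup ratio described in the following paragraphs, can be sketched as a single encoder-layer module. the pytorch code below is an illustration under stated assumptions rather than the authors' implementation: the module and parameter names are ours, the feed-forward sub-layer and residual connections are omitted, and only one direction of the attention-entropy signal is shown.

```python
import torch
import torch.nn as nn

class XMixupLayer(nn.Module):
    """Sketch of one encoder layer with cross-lingual manifold mixup.

    Hypothetical module for illustration only; not the authors' code.
    """

    def __init__(self, d_model=768, n_heads=12, lambda_max=0.5):
        super().__init__()
        # One attention module is reused for self- and cross-attention,
        # mirroring "shares parameters with the multi-head attention".
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.quality = nn.Linear(1, 1)   # (w, b) of the mixup-ratio estimator
        self.lambda_max = lambda_max     # lambda_0 in the text

    def forward(self, h_src, h_tgt):
        # Self-attention per language: h^{l+1} = MultiHead(h^l, h^l, h^l).
        h_src, _ = self.attn(h_src, h_src, h_src)
        h_tgt, _ = self.attn(h_tgt, h_tgt, h_tgt)
        # Cross-attention: target hidden states query the source sequence.
        h_tgt_src, attn = self.attn(h_tgt, h_src, h_src)   # attn: (B, T, S)
        # Attention entropy as a rough translation-quality signal (the paper
        # uses both H(A) and H(A^T); one direction shown for brevity).
        ent = -(attn.clamp_min(1e-9).log() * attn).sum(-1).mean(-1)   # (B,)
        lam = self.lambda_max * torch.sigmoid(self.quality(ent.unsqueeze(-1)))
        lam = lam.unsqueeze(-1)          # (B, 1, 1): one mixup ratio per instance
        # Manifold mixup followed by layer norm.
        return self.norm(lam * h_tgt_src + (1.0 - lam) * h_tgt)
```

such a layer would replace exactly one layer (the same on both sides) of the multilingual backbone; all other layers process the source and target sequences independently.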
the manifold mixup process mixes the target hidden states h_t^{l+1} and the source-aware target hidden states h_{t|s}^{l+1} based on the mixup ratio λ: LINEBREAK h̃_t^{l+1} = ln(λ h_{t|s}^{l+1} + (1 − λ) h_t^{l+1}), LINEBREAK where λ is an instance-level parameter, ranging from 0 to 1, that indicates the degree of manifold mixup, and ln denotes the layer norm operation. LINEBREAK 3in this section, the feed-forward layer is omitted for simplification. LINEBREAK mixup ratio the machine translation process may change the original semantics and introduce data noise in varying degrees (castilho et al., 2017; fomicheva et al., 2020). thus we introduce translation quality modeling into the mixup process to handle this problem. following fomicheva et al. (2020), we use the entropy of the attention weights to measure the translation quality: LINEBREAK h(a) = − Σ_i Σ_j a_ji log a_ji, where a_ij = softmax(h_{t_i} h_{s_j}^⊤ / √n), LINEBREAK i is the number of target tokens and j is the number of source tokens. lower entropy implies better cross-lingual alignment and higher translation quality. LINEBREAK to introduce the translation quality modeling into the manifold mixup process, we compute the mixup ratio as λ = λ_0 · σ[(h(a) + h(a^⊤))w + b], where σ is the sigmoid function, and w, b are trainable parameters. λ_0 is the max value of the mixup ratio, which is set to 0.5 in this paper. we consider two-way alignment in the translation quality modeling, i.e. h(a) and h(a^⊤). LINEBREAK scheduled sampling the source sequences utilized in training and inference are drawn from different distributions. during training, the source sequence is a real text from d_s^train, while during inference, the source sequence is a translation from d̃_s^test. this discrepancy, commonly called the exposure bias, leads to a gap between training and inference. LINEBREAK motivated by the scheduled sampling approach (bengio et al., 2015) in nmt, we sample the source sequence dynamically during training. specifically, the source sequence fed into the manifold mixup is either a real text from d_s^train or a translation from d̃_s^train, chosen with a certain probability p: LINEBREAK x_s ∈ d_s^train if p ≤ p∗, and x_s ∈ d̃_s^train if p > p∗, LINEBREAK where p∗ is decreasing during training to match the situation of inference. we utilize the inverse sigmoid decay (bengio et al., 2015), which decreases p∗ as a function of the index of the mini-batch. LINEBREAK final training objective LINEBREAK the training loss is composed of two parts: the task loss and the consistency loss: LINEBREAK l = l_task + mse(r_s, r_t) + kl(p_s, p_t), LINEBREAK where the last two terms form the consistency loss, mse(·) is the mean squared error and kl(·) is the kullback-leibler divergence. r_∗ is the sequence representation4 and p_∗ is the predicted probability distribution of downstream tasks. the task loss l_task is the sum of the source language task loss l_task^s and the target language one l_task^t, weighted by the hyper-parameter α, which is utilized to balance the training process: l_task = α l_task^s + (1 − α) l_task^t. LINEBREAK for classification, structured prediction, and span extraction tasks, the task loss is the cross-entropy loss (see details in appendix a.2). for structured prediction tasks, it is non-trivial to implement the token-level label mapping across different languages.
thus we use the label probability distribution, predicted by the source language task model, as the pseudo-label for training, where tokens and labels are corresponding. LINEBREAK the consistency loss is composed of two parts: the representation consistency loss and the prediction consistency loss. the first loss is a regularization term and provides a way to align representations across different languages (ruder et al., 2019). the second loss is to make better use of the supervision of downstream tasks. it only exists in the classification task, as the translation process does not change the label of this task, while in other tasks, it does. LINEBREAK inference LINEBREAK during inference, the manifold mixup process is the same as training, except for the scheduled sampling process. concretely, for the source language, only translation data are available in the LINEBREAK 4we utilize the mean pooling of the last layer’s hidden states as the sequence representation, which is LINEBREAK independent of the sequence length. LINEBREAK table 3: main results on the xtreme benchmark. † denotes using other data augmentation strategy in addition to machine translation. ‡ denotes results from ruder et al. (2021), which is an updated version of hu et al. (2020). LINEBREAK model LINEBREAK metrics LINEBREAK pair sentence LINEBREAK structured prediction LINEBREAK xnli LINEBREAK paws-x pos LINEBREAK acc LINEBREAK acc LINEBREAK based on xlm-r-large xlm-r (hu et al., 2020) trans-train (wei et al., 2020) filter (fang et al., 2020) xtune (zheng et al., 2021) x-mixup LINEBREAK based on mbert mbert (hu et al., 2020) joint-align (zhao et al., 2020) trans-train (hu et al., 2020) x-mixup LINEBREAK ner LINEBREAK question answering mlqa LINEBREAK xquad LINEBREAK tydiqa LINEBREAK avg. LINEBREAK f1/em LINEBREAK f1/em LINEBREAK f1/em LINEBREAK inference stage, without real data, so we use xs ∈ ˜dtest s . for classification tasks, we synthesize the predictions of both the source and target sequences by taking the mean of the predicted probability distributions as the final prediction. for structured prediction and qa tasks, we only consider the prediction of the target sequence. LINEBREAK experiments
| 6 | [108.299, 428.3356768, 200.0834953, 440.2908768] |
0-EYBhgw80y.pdf | 2021 | 1 |
LINEBREAK mopro: webly supervised learning with momentum prototypes LINEBREAK junnan li, caiming xiong, steven c.h. hoi salesforce research {junnan.li,cxiong,shoi}@salesforce.com LINEBREAK abstract LINEBREAK we propose a webly-supervised representation learning method that does not suffer from the annotation unscalability of supervised learning, nor the computation unscalability of self-supervised learning. most existing works on weblysupervised representation learning adopt a vanilla supervised learning method without accounting for the prevalent noise in the training data, whereas most prior methods in learning with label noise are less effective for real-world large-scale noisy data. we propose momentum prototypes (mopro), a simple contrastive learning method that achieves online label noise correction, out-of-distribution sample removal, and representation learning. mopro achieves state-of-the-art performance on webvision, a weakly-labeled noisy dataset. mopro also shows superior performance when the pretrained model is transferred to down-stream image classification and detection tasks. it outperforms the imagenet supervised pretrained model by +10.5 on 1-shot classification on voc, and outperforms the best self-supervised pretrained model by +17.3 when finetuned on 1% of imagenet labeled samples. furthermore, mopro is more robust to distribution shifts. code and pretrained models are available at https://github.com/ salesforce/mopro. LINEBREAK introduction LINEBREAK large-scale datasets with human-annotated labels have revolutionized computer vision. supervised pretraining on imagenet (deng et al., 2009) has been the de facto formula of success for almost all state-of-the-art visual perception models. however, it is extremely labor intensive to manually annotate millions of images, which makes it a non-scalable solution. one alternative to reduce annotation cost is self-supervised representation learning, which leverages unlabeled data. however, self-supervised learning methods (goyal et al., 2019; he et al., 2019; chen et al., 2020a; li et al., 2020b) have yet consistently shown superior performance compared to supervised learning, especially when transferred to downstream tasks with limited labels. LINEBREAK with the help of commercial search engines, photo-sharing websites, and social media platforms, there is near-infinite amount of weakly-labeled images available on the web. several works have exploited the scalable source of web images and demonstrated promising results with weblysupervised representation learning (mahajan et al., 2018; sun et al., 2017; li et al., 2017; kolesnikov et al., 2020). however, there exists two competing claims on whether weakly-labeled noisy datasets lead to worse generalization performance. one claim argues that the effect of noise can be overpowered by the scale of data, and simply applies standard supervised learning method on web datasets (mahajan et al., 2018; sun et al., 2017; li et al., 2017; kolesnikov et al., 2020). the other claim argues that deep models can easily memorize noisy labels, resulting in worse generalization (zhang et al., 2017; ma et al., 2018). in this paper, we show that both claims are partially true. while increasing the size of data does improve the model’s robustness to noise, our method can substantially boost the representation learning performance by addressing noise. 
LINEBREAK there exists a large body of literature on learning with label noise (jiang et al., 2018; han et al., 2018; guo et al., 2018; tanaka et al., 2018; arazo et al., 2019; li et al., 2020a). however, existing methods have several limitations that make them less effective for webly-supervised representation learning. first, most methods do not consider out-of-distribution (ood) samples, which is a major LINEBREAK figure 1: illustration of the normalized embedding space learned with mopro. samples from the same class gather around their class prototype, whereas ood samples are separated from in-distribution samples. label correction and ood removal are achieved based on a sample’s distance with the prototypes. LINEBREAK source of noise in real-world web datasets. second, many methods perform computation-heavy procedures for noise cleaning (jiang et al., 2018; li et al., 2019; 2020a), or require access to a set of samples with clean labels (vahdat, 2017; veit et al., 2017; lee et al., 2018), which limit their scalability in practice. LINEBREAK we propose a new method for efficient representation learning from weakly-labeled web images. our method is inspired by recent developments in contrastive learning for self-supervised learning (he et al., 2019; chen et al., 2020a; li et al., 2020b) we introduce momentum prototypes (mopro), a simple component which is effective in label noise correction, ood sample removal, and representation learning. a visual explanation of our method is shown in figure 1. we use a deep network to project images into normalized low-dimensional embeddings, and calculate the prototype for a class as the moving-average embedding for clean samples in that class. we train the network such that embeddings are pulled closer to their corresponding prototypes, while pushed away from other prototypes. images with corrupted labels are corrected either as another class or as an ood sample based on their distance to the momentum prototypes. LINEBREAK we experimentally show that: LINEBREAK • mopro achieves state-of-the-art performance on the upstream weakly-supervised learning task. • mopro substantially improves representation learning performance when the pretrained model is transferred to downstream image classification and object detection tasks. for the first time, we show that weakly-supervised representation learning achieves similar performance as supervised representation learning, under the same data and computation budget. with a larger web dataset, mopro outperforms imagenet supervised learning by a large margin. LINEBREAK • mopro learns a more robust and calibrated model that generalizes better to distribution variations. LINEBREAK related work LINEBREAK webly-supervised representation learning LINEBREAK a number of prior works exploit large web datasets for visual representation learning (divvala et al., 2014; chen & gupta, 2015; joulin et al., 2016; mahajan et al., 2018; sun et al., 2017; li et al., 2017; kolesnikov et al., 2020). these datasets contain a considerable amount of noise. approximately 20% of the labels in the jmt-300m dataset (sun et al., 2017) are noisy, whereas 34% of images in the webvision dataset (li et al., 2017) are considered outliers. surprisingly, most prior works have chosen to ignore the noise and applied vanilla supervised method, with the claim that the scale of data can overpower the noise (mahajan et al., 2018; sun et al., 2017; li et al., 2017). 
however, we show that supervised method cannot fully harvest the power of large-scale weakly-labeled datasets. LINEBREAK our method achieves substantial improvement by addressing noise, and advances the potential of webly-supervised representation learning. LINEBREAK learning with label noise LINEBREAK learning with label noise has been widely studied. some methods require access to a small set of clean samples (xiao et al., 2015; vahdat, 2017; veit et al., 2017; lee et al., 2018; zhang et al., 2020), and other methods assume that no clean labels are available. there exist two major types of approaches. the first type performs label correction using predictions from the network (reed et al., 2015; ma et al., 2018; tanaka et al., 2018; yi & wu, 2019; yang et al., 2020). the second type separates clean samples from corrupted samples, and trains the model on clean samples (han et al., 2018; arazo et al., 2019; jiang et al., 2018; wang et al., 2018; chen et al., 2019; li et al., 2020a). however, existing methods have yet shown promising results for large-scale weakly-supervised representation learning. the main reasons include: (1) most methods do not consider ood samples, which commonly occur in real-world web datasets; (2) most methods are computational-heavy due to co-training (han et al., 2018; li et al., 2020a; jiang et al., 2018; 2020), iterative training (tanaka et al., 2018; yi & wu, 2019; wang et al., 2018; chen et al., 2019), or meta-learning (li et al., 2019; zhang et al., 2019). LINEBREAK different from existing methods, mopro achieves both label correction and ood sample removal on-the-fly with a single step, based on the similarity between an image embedding and the momentum prototypes. mopro also leverages contrastive learning to learn a robust embedding space. LINEBREAK self-supervised representation learning LINEBREAK self-supervised methods have been proposed for representation learning using unlabeled data. the recent developments in self-supervised representation learning can be attributed to contrastive learning. most methods (he et al., 2019; chen et al., 2020a; oord et al., 2018; wu et al., 2018) leverage the task of instance discrimination, where augmented crops from the same source image are enforced to have similar embeddings. prototypical contrastive learning (pcl) (li et al., 2020b) performs clustering to find prototypical embeddings, and enforces an image embedding to be similar to its assigned prototypes. different from pcl, we update prototypes on-the-fly in a weakly-supervised setting, where the momentum prototype of a class is the moving average of clean samples’ embeddings. furthermore, we jointly optimize two contrastive losses and a cross-entropy loss. LINEBREAK current self-supervised representation learning methods are limited in (1) inferior performance in low-shot task adaptation, (2) huge computation cost, and (3) inadequate to harvest larger datasets. we show that weakly-supervised learning with mopro addresses these limitations. LINEBREAK method LINEBREAK in this section, we delineate the details of our method. first, we introduce the components in our representation learning framework. then, we describe the loss functions. finally, we explain the noise correction procedure for label correction and ood sample removal. a pseudo-code of mopro is provided in appendix b. LINEBREAK representation learning framework LINEBREAK our proposed framework consists of the following components. figure 2 gives an illustration. 
• a noisy training dataset {(x_i, y_i)}_{i=1}^{n}, where x_i is an image and y_i ∈ {1, ..., k} is its class label. LINEBREAK • a pseudo-label ŷ_i for each image x_i, which is its corrected label. details for generating the pseudo-label are explained in sec. 3.3. LINEBREAK • an encoder network, which maps an augmented image x̃_i to a representation vector v_i ∈ r^{d_e}. we experiment with resnet-50 (he et al., 2016) as the encoder, where the activations of the final global pooling layer (d_e = 2048) are used as the representation vector. LINEBREAK • a classifier (a fully-connected layer followed by softmax) which receives the representation v_i as input and outputs class predictions p_i. LINEBREAK • a projection network, which maps the representation v_i into a low-dimensional embedding z_i ∈ r^{d_p} (d_p = 128). z_i is always normalized to the unit sphere. following simclr (chen et al., 2020a), we use an mlp with one hidden layer as the projection network. LINEBREAK • momentum embeddings z′_i generated by a momentum encoder. the momentum encoder has the same architecture as the encoder followed by the projection network, and its parameters are the moving-average of the encoder's and the projection network's parameters. same as in moco (he et al., 2019), we maintain a queue of momentum embeddings of past samples. LINEBREAK • momentum prototypes c ∈ r^{d_p×k}. the momentum prototype of the k-th class, c_k, is the normalized moving-average embedding for samples with pseudo-label ŷ_i = k. LINEBREAK figure 2: proposed weakly-supervised learning framework. we jointly optimize a prototypical contrastive loss using momentum prototypes, an instance contrastive loss using momentum embeddings, and a cross-entropy loss using pseudo-labels. the pseudo-label for a sample is generated based on its original training label, the model's prediction, and the sample's distance to the prototypes. LINEBREAK contrastive loss LINEBREAK as illustrated in figure 1, we aim to learn an embedding space where samples from the same class gather around their class prototype, while samples from different classes are separated. we achieve this with two contrastive losses: (1) a prototypical contrastive loss l_pro which increases the similarity between an embedding and its corresponding class prototype, (z_i, c_{ŷ_i}), in contrast to other prototypes; (2) an instance contrastive loss l_ins which increases the similarity between two embeddings of the same source image, (z_i, z′_i), in contrast to embeddings of other images. specifically, the contrastive losses are defined as: LINEBREAK l_pro^i = − log [ exp(z_i · c_{ŷ_i} / τ) / Σ_{k'=1}^{k} exp(z_i · c_{k'} / τ) ], l_ins^i = − log [ exp(z_i · z′_i / τ) / Σ_{r'=0}^{r} exp(z_i · z′_{r'} / τ) ], LINEBREAK where τ is a temperature parameter and ŷ_i is the pseudo-label. we use r negative momentum embeddings to construct the denominator of the instance contrastive loss. LINEBREAK we train the classifier with the cross-entropy loss, using pseudo-labels as targets: l_ce^i = − log(p_i^{ŷ_i}). LINEBREAK we jointly optimize the contrastive losses and the classification loss. the training objective is: LINEBREAK l = Σ_i (l_ce^i + λ_pro l_pro^i + λ_ins l_ins^i). LINEBREAK for simplicity, we set λ_pro = λ_ins = 1 for all experiments. LINEBREAK noise correction LINEBREAK we propose a simple yet effective method for online noise correction during training, which cleans label noise and removes ood samples.
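before turning to the noise-correction procedure described next, the training objective above can be summarized in a short pytorch sketch. this is an illustrative re-implementation under stated assumptions (embeddings and prototypes already l2-normalized, ood samples already filtered from the batch, a queue of r momentum embeddings available), not the authors' code.

```python
import torch
import torch.nn.functional as F

def mopro_losses(z, z_momentum, queue, prototypes, pseudo_labels, logits,
                 tau=0.1, lambda_pro=1.0, lambda_ins=1.0):
    """Sketch of the MoPro objective; names and shapes are illustrative.

    z:             (B, d) L2-normalized embeddings from the projection network
    z_momentum:    (B, d) L2-normalized momentum embeddings of the same images
    queue:         (R, d) momentum embeddings of past samples (negatives)
    prototypes:    (K, d) L2-normalized momentum prototypes
    pseudo_labels: (B,)   corrected labels (OOD samples assumed removed here)
    logits:        (B, K) classifier outputs before softmax
    """
    # Prototypical contrastive loss: pull z_i towards its class prototype,
    # against all other prototypes (a softmax cross-entropy over prototypes).
    proto_logits = z @ prototypes.t() / tau                  # (B, K)
    l_pro = F.cross_entropy(proto_logits, pseudo_labels)

    # Instance contrastive loss: the positive is the momentum embedding of
    # the same image; negatives come from the queue (InfoNCE, as in MoCo).
    pos = (z * z_momentum).sum(dim=1, keepdim=True) / tau    # (B, 1)
    neg = z @ queue.t() / tau                                 # (B, R)
    inst_logits = torch.cat([pos, neg], dim=1)                # (B, 1+R)
    inst_targets = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    l_ins = F.cross_entropy(inst_logits, inst_targets)

    # Classification loss on pseudo-labels.
    l_ce = F.cross_entropy(logits, pseudo_labels)

    return l_ce + lambda_pro * l_pro + lambda_ins * l_ins
```

in the full method, samples flagged as ood would additionally be excluded from l_ce and l_pro but kept in l_ins, as described in the next subsection.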
for each sample, we generate a soft pseudo-label q_i by combining the classifier's output probability p_i with s_i, a class probability distribution calculated using the sample's similarity w.r.t. the momentum prototypes: LINEBREAK q_i = α p_i + (1 − α) s_i, where s_i^k = exp(z_i · c_k / τ) / Σ_{k'=1}^{k} exp(z_i · c_{k'} / τ), LINEBREAK and the combination weight is simply set as α = 0.5 in all experiments. LINEBREAK we convert q_i into a hard pseudo-label ŷ_i based on the following rules: (1) if the highest score of q_i is above a certain threshold t, use the class with the highest score as the pseudo-label; (2) otherwise, if the score for the original label y_i is higher than the uniform probability, use y_i as the pseudo-label; (3) otherwise, label it as an ood sample: LINEBREAK ŷ_i = arg max_k q_i^k if max_k q_i^k > t; ŷ_i = y_i else if q_i^{y_i} > 1/k; ŷ_i = ood otherwise. LINEBREAK we remove ood samples from both the cross-entropy loss and the prototypical contrastive loss so that they do not affect class-specific learning, but include them in the instance contrastive loss to further separate them from in-distribution samples. examples of ood images and corrected pseudo-labels are shown in the appendices. LINEBREAK momentum prototypes LINEBREAK for each class k, we calculate its momentum prototype as a moving average of the normalized embeddings for samples with pseudo-label k. specifically, we update c_k by: LINEBREAK c_k ← normalize(m c_k + (1 − m) z_i), ∀i ∈ {i | ŷ_i = k}, LINEBREAK where normalize(c) = c / ||c||_2. the momentum coefficient m is set to 0.999 in our experiments. LINEBREAK experiments LINEBREAK dataset for upstream training LINEBREAK we use the webvision (li et al., 2017) dataset as the noisy training data. it consists of images automatically crawled from google and flickr, using visual concepts from imagenet as queries. we experiment with three versions of webvision with different sizes: (1) webvision-v1.0 contains 2.44m images with the same classes as the imagenet-1k (ilsvrc 2012) dataset; (2) webvision-v0.5 is a randomly sampled subset of webvision-v1.0, which contains the same number of images (1.28m) as imagenet-1k; (3) webvision-v2.0 contains 16m images with 5k classes. LINEBREAK implementation details LINEBREAK we follow standard settings for imagenet training: batch size is 256; the total number of epochs is 90; the optimizer is sgd with a momentum of 0.9; the initial learning rate is 0.1, decayed at 40 and 80 epochs; weight decay is 0.0001. we use resnet-50 (he et al., 2016) as the encoder. for mopro-specific hyperparameters, we set τ = 0.1, α = 0.5, t = 0.8 (t = 0.6 for webvision-v2.0). the momentum for both the momentum encoder and the momentum prototypes is set to 0.999. the queue to store momentum embeddings has a size of 8192. we apply standard data augmentation (crop and horizontal flip) to the encoder's input, and stronger data augmentation (the color changes in moco (he et al., 2019)) to the momentum encoder's input. we warm up the model for 10 epochs by training on all samples with original labels, before applying noise correction. LINEBREAK upstream task performance LINEBREAK in table 1, we compare mopro with existing weakly-supervised learning methods trained on webvision-v1.0, where mopro achieves state-of-the-art performance.
since the training dataset LINEBREAK method LINEBREAK architecture LINEBREAK resnet-50 cross-entropy (tu et al., 2020) inceptionresnet-v2 mentornet (jiang et al., 2018) inception-v2 curriculumnet (guo et al., 2018) cleannet (lee et al., 2018) resnet-50 curriculumnet (guo et al., 2018; tu et al., 2020) resnet-50 resnet-50 som (tu et al., 2020) resnet-50 distill (zhang et al., 2020) LINEBREAK cross-entropy (decoupled) mopro (ours) LINEBREAK resnet-50 resnet-50 LINEBREAK webvision LINEBREAK imagenet LINEBREAK top-1 LINEBREAK top-5 LINEBREAK top-1 LINEBREAK top-5 LINEBREAK table 1: comparison with state-of-the-art methods on webvision-v1.0. numbers denote accuracy (%) on the clean webvision-v1.0 validation set and the ilsvrc 2012 validation set. cleannet (lee et al., 2018) and distill (zhang et al., 2020) require data with clean annotations. LINEBREAK has imbalanced number of samples per-class, inspired by kang et al. (2020), we perform the following decoupled training steps to re-balance the classifier: (1) pretrain the model with mopro; (2) perform noise correction on the training data using the pretrained model, following the method in section 3.3; (3) keep the pretrained encoder fixed and finetune the classifier on the cleaned dataset, using square-root data sampling (mahajan et al., 2018) which balances the classes. we retrain the classifier for 15 epochs, using a learning rate of 0.01 which is decayed at 5 and 10 epochs. surprisingly, we also find that a vanilla cross-entropy method with decoupled classifier re-balancing can also achieve competitive performance, outperforming most existing baselines. LINEBREAK transfer learning
| 5 | [108.299, 415.9626768, 237.8999467, 427.9178768] |
ZDaSIkWT-AP.pdf | 2022 | 0 |
LINEBREAK case-based reasoning for better generalization in textual reinforcement learning LINEBREAK mattia atzeni ibm research, epfl [email protected] LINEBREAK shehzaad dhuliawala eth zürich [email protected] LINEBREAK keerthiram murugesan ibm research [email protected] LINEBREAK mrinmaya sachan eth zürich [email protected] LINEBREAK abstract LINEBREAK text-based games (tbg) have emerged as promising environments for driving research in grounded language understanding and studying problems like generalization and sample efficiency. several deep reinforcement learning (rl) methods with varying architectures and learning schemes have been proposed for tbgs. however, these methods fail to generalize efficiently, especially under distributional shifts. in a departure from deep rl approaches, in this paper, we propose a general method inspired by case-based reasoning to train agents and generalize out of the training distribution. the case-based reasoner collects instances of positive experiences from the agent’s interaction with the world in the past and later reuses the collected experiences to act efficiently. the method can be applied in conjunction with any existing on-policy neural agent in the literature for tbgs. our experiments show that the proposed approach consistently improves existing methods, obtains good out-of-distribution generalization, and achieves new state-of-the-art results on widely used environments. LINEBREAK introduction LINEBREAK text-based games (tbgs) have emerged as key benchmarks for studying how reinforcement learning (rl) agents can tackle the challenges of grounded language understanding, partial observability, large action spaces, and out-of-distribution generalization (hausknecht et al., 2020; ammanabrolu & riedl, 2019). while we have indeed made some progress on these fronts in recent years (ammanabrolu & hausknecht, 2020; adhikari et al., 2020; murugesan et al., 2021b;a), these agents are still very inefficient and suffer from insufficient generalization to novel environments. as an example, state-of-the-art agents require 5 to 10 times more steps than a human to accomplish even simple household tasks (murugesan et al., 2021b). as the agents are purely neural architectures requiring significant training experience and computation, they fail to efficiently adapt to new environments and use their past experiences to reason in novel situations. this is in stark contrast to human learning which is much more robust, efficient and generalizable (lake et al., 2017). LINEBREAK motivated by this fundamental difference in learning, we propose new agents that rely on case-based reasoning (cbr) (aamodt & plaza, 1994) to efficiently act in the world. cbr draws its foundations in cognitive science (schank, 1983; kolodner, 1983) and mimics the process of solving new tasks based on solutions to previously encountered similar tasks. concretely, we design a general cbr framework that enables an agent to collect instances of past situations which led to a positive reward (known as cases). during decision making, the agent retrieves the case most similar to the current situation and then applies it after appropriately mapping it to the current context. LINEBREAK the cbr agent stores past experiences, along with the actions it performed, in a case memory. 
in order to efficiently use these stored experiences, the agent should be able to represent relevant contextual information from the state of the game in a compact way, while retaining the property LINEBREAK figure 1: overview of the approach and architecture of the cbr agent. a memory stores actions that have been used in previous interactions. the context of the game is learned from the state knowledge graph using a graph attention mechanism. actions are retrieved from the memory based on this context representation and mapped to the current state. if no valid action is obtained using cbr, the algorithm falls back to a neural agent. LINEBREAK that contexts that require similar actions receive similar representations. we represent the state of the game as a knowledge graph (ammanabrolu & hausknecht, 2020) and we address these challenges by utilizing (a) a seeded message propagation that focuses only on a subset of relevant nodes and (b) vector quantization (ballard, 2000) to efficiently map similar contexts to similar discrete representations. vector quantization allows the model to significantly compress the context representations while retaining their semantics; thereby, allowing for a scalable implementation of cbr in rl settings. an illustration of the framework is shown in figure 1. LINEBREAK our experiments show that cbr can be used to consistently boost the performance of various onpolicy rl agents proposed in the literature for tbgs. we obtain a new state-of-the-art on the textworld commonsense (murugesan et al., 2021b) dataset and we achieve better or comparable scores on 24 of the 33 games in the jericho suite (hausknecht et al., 2020) compared to previous work. we also show that cbr agents are resilient to domain shifts and suffer only marginal drops in performance (6%) on out-of-distribution settings when compared to their counterparts (35%). LINEBREAK preliminaries LINEBREAK text-based games. text-based games (tbgs) provide a challenging environment where an agent can observe the current state of the game and act in the world using only the modality of text. the state of the game is hidden, so tbgs can be modeled as a partially observable markov decision process (pomdp) (s, a, o, t , e, r), where s is the set of states of the environment of the game, a is the natural language action space, o is the set of observations or sequences of words describing the current state, t are the conditional transition probabilities from one state to another, e are the conditional observation probabilities, r : s × a → r is the reward function, which maps a state and action to a scalar reward that the agent receives. LINEBREAK case-based reasoning. case-based reasoning (cbr) is the process of solving new problems based on the solution of previously seen similar problems. generally, cbr assumes access to a memory that stores past problems (known as cases) and their solutions. when a new problem is encountered, cbr will (i) retrieve a similar problem and its solution from memory; (ii) reuse the solution by mapping it to the current problem; (iii) revise the solution by testing it and checking whether it is a viable way to address the new problem; and (iv) retain the solution in memory if the adaptation to the new problem was successful. LINEBREAK case-based reasoning in reinforcement learning LINEBREAK this section introduces our framework inspired by cbr for improving generalization in tbgs. 
even though we situate our work in tbgs, it serves as a good starting point for applying cbr in more general rl settings. we consider an on-policy rl agent that, at any given time step t, has access to a memory mt, that can be used to retrieve previous experiences. the memory contains LINEBREAK key-value pairs, where the keys are a context representation of a game state and values are actions that were taken by the agent w.r.t to this context. as mentioned in section 2, case-based reasoning can be formalized as a four-step process. we describe our proposed methodology for each step below. algorithm 1 provides a detailed formalization of our approach. LINEBREAK retrieve. given the state of the game st and the valid actions at, we want to retrieve from the memory mt previous experiences that might be useful in decision-making at the current state. to this end, for each admissible action at ∈ at, we define a context selector ct = context(st, at). the context selector is an action-specific representation of the state, namely the portion of the state that is relevant to the execution of an action. we will explain later how the context selector is defined in our implementation. for each context ct, we retrieve from the memory the context-action pair (cm has maximum similarity with ct. we denote as δ = sim(ct, cm t ) ∈ [0, 1] the relevance score given to the retrieved action. only actions am t with a relevance score above a retriever threshold τ are retrieved from mt. we denote as am the final set of action-relevance pairs returned by the retriever, as shown in algorithm 1. LINEBREAK t ), such that cm t LINEBREAK , am LINEBREAK t LINEBREAK t LINEBREAK reuse. the goal of the reuse step is to adapt the actions retrieved from the memory based on the current state. this is accomplished by a reuse function, that is applied to each retrieved action to construct a set ˜at of candidate actions that should be applicable to the current state, each paired with a confidence level. LINEBREAK if any of the action candidates ˜at is revise. a valid action, then the one with the highest relevance δ is executed, otherwise a neural agent π is used to select the best action a(cid:63) t . we denote with rt = r(st, a(cid:63) t ) the obtained reward. note that π can be an existing agent for tbgs (murugesan et al., 2021c;b; ammanabrolu & hausknecht, 2020). LINEBREAK algorithm 1: cbr in text-based rl LINEBREAK • retrieve let ct = {context(st, at) | at ∈ at} be a set of context selectors for state st at time step t am t ← ∅ for ct ∈ ct do , am let (cm t t )∈mt sim(ct, cm t ) arg max(cm let δ = sim(ct, cm t ) if δ > τ then am LINEBREAK t ) = t ,am LINEBREAK t ∪ {(am t LINEBREAK t ← am LINEBREAK end LINEBREAK end LINEBREAK • reuse build a set of action candidates: ˜at = {reuse(am t (am t LINEBREAK , st, δ) | , δ) ∈ am t } LINEBREAK • revise if at ∩ ˜at (cid:54)= ∅ then LINEBREAK let a(cid:63) LINEBREAK t , δ(cid:63) = arg max˜at,δ∈ ˜at LINEBREAK else LINEBREAK a(cid:63) t = arg maxat∈at π(at|st) LINEBREAK end let rt = r(st, a(cid:63) time step t by executing action a(cid:63) t LINEBREAK t ) be the reward obtained at LINEBREAK • retain let c(cid:63) action a(cid:63) t t , a(cid:63) t = {(c(cid:63) if rt > 0 then LINEBREAK t = context(st, a(cid:63) LINEBREAK t ) be the context of LINEBREAK t ), . . . , (c(cid:63) LINEBREAK mt+1 ← mt ∪ retain(t ) LINEBREAK end LINEBREAK retain. finally, the retain step stores successful experiences as new cases in the memory, so that they can be retrieved in the future. 
in principle, this can be accomplished by storing actions for which the agent obtained positive rewards. however, we found that storing previous actions as well can result in improved performance. therefore, whenever rt > 0, a retain function is used to select which of the past executed actions and their contexts should be stored in the memory. in our experiments, the retain function selects the k most recent actions, but other implementations are possible, as discussed in appendix d. LINEBREAK a cbr policy agent to generalize in text-based games LINEBREAK designing an agent that can act efficiently in tbgs using the described approach poses several challenges. above all, efficient memory use is crucial to making the approach practical and scalable. since the context selectors are used as keys for accessing values in the memory, their representation LINEBREAK needs to be such that contexts where similar actions were taken receive similar representations. at the same time, as the state space is exponential, context representations need to be focused only on relevant portions of the state and they need to be compressed and compact. LINEBREAK representing the context through seeded graph attention LINEBREAK state space as a knowledge graph. following previous works (ammanabrolu & riedl, 2019; ammanabrolu & hausknecht, 2020; murugesan et al., 2021c), we represent the state of the game as a dynamic knowledge graph gt = (vt, rt, et), where a node v ∈ vt represents an entity in the game, r ∈ rt is a relation type, and an edge v r−→ v(cid:48) ∈ et represents a relation of type r ∈ rt between entities v, v(cid:48) ∈ vt. in tbgs, the space of valid actions at can be modeled as a templatebased action space, where actions at are instances of a finite set of templates with a given set of entities, denoted as vat ⊆ vt. as an example, the action “kill orc with sword” can be seen as an instance of the template “kill v1 with v2”, where v1 and v2 are “orc” and “sword” respectively. LINEBREAK seeded graph attention. the state graph gt and the entities vat are provided as input to the agent for each action at ∈ at, in order to build an action-specific contextualized representation of the state. a pre-trained bert model (devlin et al., 2019) is used to get a representation h(0) v ∈ rd for each node v ∈ vt. inspired by sun et al. (2018), we propose a seeded graph attention mechanism (gat), so that the propagation of messages is weighted more for nodes close to the entities vat. let α(l) vu denote the attention coefficients given by a graph attention network (velickovic et al., 2018) at layer l for nodes v, u ∈ vt. then, for each node v ∈ vt, we introduce a coefficient β(l) that scales v with the amount of messages received by node v at layer l: LINEBREAK |vat | 0 LINEBREAK if v ∈ vat otherwise LINEBREAK v + λ LINEBREAK u∈nv LINEBREAK vuβ(l) α(l) u , LINEBREAK where nv denotes the neighbors of v, considering the graph as undirected. note that, at layer l = 1, only the nodes in vat receive messages, whereas for increasing values of l, β(l) v will be non-zero for their (l − 1)-hop neighbors as well. the representation of each v ∈ vt is then updated as: LINEBREAK v = ffn(l) h(l) LINEBREAK h(l−1) v LINEBREAK + β(l) v LINEBREAK vuw(l)h(l−1) α(l) LINEBREAK u LINEBREAK u∈nv LINEBREAK where ffn(l+1) is a 2-layer feed-forward network with relu non-linearity and w(l) ∈ rd×d are learnable parameters. 
finally, we compute a final continuous contextualized representation cat of the state by summing the linear projections of the hidden representations of each v ∈ vat and passing the result through a further feed-forward network. LINEBREAK memory access through context quantization LINEBREAK given a continuous representation cat of the context, we need an efficient way to access the memory mt to retrieve or store actions based on such a context selector. storing and retrieving based on the continuous representation cat would be impractical for scalability reasons. additionally, since the parameters of the agent change over the training time, the same context would result in several duplicated entries in the memory even with a pre-trained agent over different episodes. LINEBREAK discretization of the context. to address these problems, we propose to use vector quantization (ballard, 2000) before reading or writing to memory. following previous work (chen et al., 2018; sachan, 2020), we learn a discretization function φ : rd → zd k, that maps the continuous representation cat into a k-way d-dimensional code ct ∈ zd k, with |zk| = k (we refer to ct as a kd code). with reference to section 3, then we will use ct = context(st, at) = φ(cat) as the context selector used to access the memory mt. in order to implement the discretization function, we define a set of k key vectors ki ∈ rd, i = 1, . . . , k, and we divide each vector in d partitions kj i ∈ rd/d, j = 1, . . . , d. similarly, we divide cat in d partitions cj ∈ rd/d, j = 1, . . . , d. at then, we compute the j-th code zj of ct by nearest neighbor search, as zj = arg mini (cid:107)cj i (cid:107)2 2. at we use the straight-through estimator (bengio et al., 2013) to address the non differentialbility of the argmin operator. LINEBREAK − kj LINEBREAK memory access. the kd codes introduced above are used to provide a memory-efficient representation of the keys in the memory. then, given the kd code representing the current context selector ct, we query the memory by computing a similarity measure sim(ct, cm t ) between ct and each cm . the context-action pair with the highest similarity is returned as a result of the memory access, together with a relevance score δ representing the value of the similarity measure. LINEBREAK in mt. the similarity function is defined as the fraction of codes shared by ct and cm LINEBREAK t LINEBREAK t LINEBREAK symbolic action reuse and revise policy LINEBREAK we use a simple purely symbolic reuse function to adapt the actions retrieved from the memory to the current state. let ct be the context selector computed based on state st and the entities vat, as explained in sections 4.1 and 4.2. denote with (cm , am t ) the context-action pair retrieved from mt t with confidence δ. then, the reuse function reuse(am , st, δ) constructs the action candidate ˜at as the action with the same template as am applied to the entities vat. if the reuse step cannot generate a valid action, we revert to the neural policy agent π that outputs a probability distribution over the current admissible actions at. LINEBREAK t LINEBREAK t LINEBREAK training LINEBREAK in section 3, we have introduced an on-policy rl agent that relies on case-based reasoning to act in the world efficiently. this agent can be trained in principle using any online rl method. this section discusses the training strategies and learning objectives used in our implementation. LINEBREAK objective. 
two main portions of the model need to be trained: (a) the retriever, namely the neural network that computes the context representation and accesses the memory through its discretization, and (b) the main neural agent π which is used in the revise step. all agents π used in our experiments are trained with an advantage actor-critic (a2c) method. for optimizing the parameters of π, we use the same learning objectives defined by adolphs & hofmann (2019), as described in appendix a. whenever the executed action a(cid:63) t is not chosen by the model π but it comes from the symbolic reuse step, then we optimize instead an additional objective for the retriever, namely the following contrastive loss (hadsell et al., 2006): LINEBREAK l(t) LINEBREAK r = LINEBREAK (1 − yt)(1 − sim(ct, cm LINEBREAK yt max{0, µ − 1 + sim(ct, cm LINEBREAK where ct denotes the context selector of the action executed at time step t, cm is the corresponding key entry retrieved from mt, µ is the margin parameter of the contrastive loss, and yt = 1 if rt > 0, yt = 0 otherwise. this objective encourages the retriever to produce similar representations for two contexts where reusing an action yielded a positive reward. LINEBREAK t LINEBREAK pretraining. to make learning more stable and allow the agent to act more efficiently, we found it beneficial to pretrain the retriever. this minimizes large shifts in the context representations over the training time. we run a baseline agent (ammanabrolu & hausknecht, 2020) to collect instances of the state graph and actions that yielded positive rewards. then we train the retriever to encode to similar representations the contexts for which similar actions (i.e., actions with the same template) were used. this is achieved using the same contrastive loss defined above. LINEBREAK experiments LINEBREAK this section provides a detailed evaluation of our approach. we assess quantitatively the performance of cbr combined with existing rl approaches and we demonstrate its capability to improve sample efficiency and generalize out of the training distribution. next, we provide qualitative insights and examples of the behavior of the model and we perform an ablation study to understand the role played by the different components of the architecture. LINEBREAK experimental setup LINEBREAK agents. we consider several agents obtained by plugging existing rl methods in the revise step. we first define two simple approaches: cbr-only, where we augment a random policy with the LINEBREAK easy LINEBREAK medium LINEBREAK hard LINEBREAK #steps LINEBREAK norm. score LINEBREAK #steps LINEBREAK norm. score LINEBREAK #steps LINEBREAK norm. score LINEBREAK text tpc kg-a2c bike LINEBREAK cbr-only text + cbr tpc + cbr kg-a2c + cbr bike + cbr LINEBREAK table 1: test-set performance for twc in-distribution games LINEBREAK easy LINEBREAK medium LINEBREAK hard LINEBREAK #steps LINEBREAK norm. score LINEBREAK #steps LINEBREAK norm. score LINEBREAK #steps LINEBREAK norm. score LINEBREAK text tpc kg-a2c bike LINEBREAK cbr-only text + cbr tpc + cbr kg-a2c + cbr bike + cbr LINEBREAK table 2: test-set performance for twc out-of-distribution games LINEBREAK cbr approach, and text + cbr, which relies on the cbr method combined with a simple grubased policy network that consumes as input the textual observation from the game. 
next, we select three recently proposed tbg approaches: text+commonsense (tpc) (murugesan et al., 2021b), kg-a2c (ammanabrolu & hausknecht, 2020), and bike (murugesan et al., 2021c), to create the tpc + cbr, kg-a2c + cbr and bike + cbr agents. we treat the original agents as baselines. LINEBREAK datasets. we empirically verify the efficacy of our approach on textworld commonsense (twc) (murugesan et al., 2021b) and jericho (hausknecht et al., 2020). jericho is a well-known and challenging learning environment including 33 interactive fiction games. twc is an environment which builds on textworld (côté et al., 2018) and provides a suite of games requiring commonsense knowledge. twc allows agents to be tested on two settings: the in-distribution games, where the objects that the agent encounters in the test set are the same as the objects in the training set, and the out-of-distribution games which have no entity in common with the training set. for each of these settings, twc provides three difficulty levels: easy, medium, and hard. LINEBREAK evaluation metrics. following murugesan et al. (2021b), we evaluate the agents on twc based on the number of steps (#steps) required to achieve the goal (lower is better) and the normalized cumulative reward (norm. score) obtained by the agent (larger is better). on jericho, we follow previous work (hausknecht et al., 2020; guo et al., 2020; ammanabrolu & hausknecht, 2020) and we report the average score achieved over the last 100 training episodes. LINEBREAK results on textworld commonsense LINEBREAK table 1 reports the results on twc for the in-distribution set of games. overall, we observe that cbr consistently improves the performance of all the baselines. the performance boost is large enough that even a simple method as text + cbr outperforms all considered baselines except bike. LINEBREAK out-of-distribution generalization. cbr’s ability to retrieve similar cases should allow our method to better generalize to new and unseen problems. we test this hypothesis on the out-of LINEBREAK figure 2: performance on twc (showing mean and standard deviation averaged over 5 runs) for the three difficulty levels: easy (left), medium (middle), hard (right) using normalized score and number of steps. LINEBREAK distribution games in twc. the results of this experiment are reported in table 2. we notice that all existing approaches fail to generalize out of the training distribution and suffer a substantial drop in performance in this setting. however, when coupled with cbr, the drop is minor (on average 6% with cbr vs 35% without on the hard level). interestingly, even the cbr-only agent achieves competitive results compared to the top-performing baselines. LINEBREAK sample efficiency. another key benefit of our approach comes as better sample efficiency. with its ability to explicitly store prior solutions effectively, cbr allows existing algorithms to learn faster. figure 2 shows the learning curves for our best agents and the corresponding baselines. the plots report the performance of the agent over the training episodes, both in terms of the number of steps and the normalized score. overall, we observe that the cbr agents obtain faster convergence to their counterparts on all difficulty levels. LINEBREAK performance on the jericho games LINEBREAK we evaluate our best performing variant from the experiments on twc (bike + cbr) against existing approaches on the 33 games in the jericho environment. 
we compare our approach against strong baselines, including tdqn (hausknecht et al., 2020), drrn (he et al., 2016), kg-a2c (ammanabrolu & hausknecht, 2020), mprc-dqn (guo et al., 2020), and rc-dqn (guo et al., 2020). the same experimental setting and handicaps as the baselines are used, as we train for 100 000 steps and we assume access to valid actions. table 3 summarizes the results of the jericho games. we observe that our cbr agent achieves comparable or better performance than any baseline on 24 (73%) of the games, strictly outperforming all the other agents in 18 games. LINEBREAK qualitative analysis and ablation studies LINEBREAK insights on the model. figure 3 provides two examples showing the bike + cbr agent interacting with the zork1 game. in the example on top, the agent retrieves an experience that can be successfully reused and turned into a valid action at the current time step. the heat maps visualize the value of the context similarity function defined in section 4 for the top entries in the memory. in the negative example at the bottom instead, the agent retrieves an action that is not useful and needs to fall back to the neural policy π. figure 4 (top) shows the fraction of times that actions retrieved from the memory are reused successfully in the twc games. we observe that, both for in-distribution and out-of-distribution games, the trained agent relies on cbr from 60% to approximately 70% of the times. figure 4 (bottom) further shows the fraction of times that the neural agent would have been able to select a rewarded action as well, when the cbr reuses a successful action. the plot shows that, for the out-of-distribution games, the neural agent would struggle to select good actions when the cbr is used. LINEBREAK game LINEBREAK 905 acorncourt adventureland afflicted awaken detective dragon inhumane library moonlit omniquest pentari reverb snacktime temple ztuu advent balances deephome gold jewel karn ludicorp yomomma zenon zork1 zork3 anchor enchanter sorcerer spellbrkr spirit tryst205 LINEBREAK best agent LINEBREAK human (max) LINEBREAK human (walkthrough-100) LINEBREAK tdqn LINEBREAK drrn kg-a2c mprc-dqn rc-dqn bike + cbr LINEBREAK table 3: average raw score on the jericho games. we denote with colors the difficulty of the games (green for possible games, yellow for difficult games and red for extreme games). the last row reports the fraction and the absolute number of games where an agent achieves the best score. we additionally report human performance (human – max) and the 100-step results from a human-written walkthrough (human – walkthrough 100). results are taken from the original papers or “−” is used if a result was not reported. LINEBREAK figure 3: examples from the zork1 game, showing the content of the memory and the context similarities, in a situation where the agent is able to reuse a previous experience and in a case where the revise step is needed. LINEBREAK figure 4: fraction of times that a retrieved action is reused successfully on twc (top). fraction of times that the neural agent would have picked a rewarded action when cbr is used successfully (bottom). LINEBREAK ablation studies. in order to understand the role of the main modules of our cbr agent, we designed some ablation studies. first, instead of using the seeded gat, we define the context of a state-action pair context(st, at) as just one of the entities that at is applied to. 
this definition suits well the twc games because rewarded actions are always applied to one target object and a location for that object (see appendix g for details). note that, since the set of entities is discrete, no context quantization is needed. we report the performance of the resulting bike + cbr (w/o gat) agent in LINEBREAK easy LINEBREAK medium LINEBREAK hard LINEBREAK n bike + cbr (w/o gat) bike + cbr (w/o vq) LINEBREAK i LINEBREAK t bike + cbr (w/o gat) u o LINEBREAK bike + cbr (w/o vq) LINEBREAK table 4: results of the ablation study on twc, evaluated based on the number of steps (#steps) to solve the games. LINEBREAK figure 5: number of entries in the memory over training. LINEBREAK table 4. the results show that cbr on twc is effective even with this simple context definition, but the lower performance of the agent demonstrates the advantage of incorporating additional context information. finally, we investigate the role played by vector quantization, by experimenting with an agent (bike + cbr w/o vq) that stores the continuous context representations. in general, this poses scalability challenges, but since twc has only 5 games per difficulty level, each with a small number of objects, we were able to evaluate the performance of this agent on the three levels separately. the results, reported in table 4, show that this agent performs much worse than the other cbr implementations. this happens because storing continuous representations over the training results in duplicated entries in the memory and makes it harder to retrieve meaningful experiences. figure 5 demonstrates how the size (number of entries) in the memory grows over the training time. in this experiment, we trained the agent on all difficulty levels at the same time, resulting in the implementation running out of memory (oom) on the gpu. more ablation studies are reported in appendix c, d, e, f, and g. LINEBREAK related work LINEBREAK text-based rl. tbgs are a rich domain for studying grounded language understanding and how text information can be utilized in control. prior work has explored text-based rl to learn strategies for multi-user dungeon games (narasimhan et al., 2015) and other environments (branavan et al., 2012). zahavy et al. (2018) proposed the action-elimination deep q-network (ae-dqn), which learns to predict invalid actions in the text-adventure game zork. recently, côté et al. (2018) introduced textworld, a sandbox learning environment for training and evaluating rl agents on text-based games. on the same line, murugesan et al. (2021b) introduced twc that requires agents with commonsense knowledge (murugesan et al., 2020; basu et al., 2021). the ledeepchef system (adolphs & hofmann, 2019) achieved good results on the first textword problems (trischler et al., 2019) by supervising the model with entities from freebase, allowing the agent to generalize to unseen objects. a recent line of work learns symbolic (typically graph-structured) representations of the agent’s belief. notably, ammanabrolu & riedl (2019) proposed kg-dqn and adhikari et al. (2020) proposed gata. we also use graphs to model the state of the game. LINEBREAK case-based reasoning in rl. in the context of rl, cbr has been used to speed up and improve transfer learning in heuristic-based rl. celiberto jr et al. (2011) and bianchi et al. (2018) have shown that cases collected from one domain can be used as heuristics to achieve faster convergence when learning an rl algorithm on a different domain. 
in contrast to these works, we present a scalable way of using cbr alongside deep rl methods in settings with very large state spaces. more recently, cbr has been successfully applied in the field of knowledge-based reasoning. das et al. (2020) and das et al. (2021) show that cbr can effectively learn to generate new logical reasoning chains from prior cases, to answer questions on knowledge graphs. LINEBREAK conclusion and future work LINEBREAK in this work, we proposed new agents for tbgs using case-based reasoning. in contrast to expensive deep rl approaches, cbr simply builds a collection of its past experiences and uses the ones relevant to the current situation to decide upon its next action in the game. our experiments showed that cbr when combined with existing rl agents can make them more efficient and aid generalization in out-of-distribution settings. even though cbr was quite successful in the tbgs explored in our work, future work is needed to understand the limitations of cbr in such settings. LINEBREAK acknowledgements LINEBREAK this work was funded in part by an ibm fellowship to sd and in part by a small project grant to ms from the hasler foundation. LINEBREAK references LINEBREAK agnar aamodt and enric plaza. case-based reasoning: foundational issues, methodological LINEBREAK variations, and system approaches. ai communications, 7(1):39–59, 1994. LINEBREAK ashutosh adhikari, xingdi yuan, marc-alexandre côté, mikuláš zelinka, marc-antoine rondeau, romain laroche, pascal poupart, jian tang, adam trischler, and william l hamilton. learning dynamic knowledge graphs to generalize on text-based games. arxiv preprint arxiv:2002.09127, 2020. LINEBREAK leonard adolphs and thomas hofmann. ledeepchef: deep reinforcement learning agent for LINEBREAK families of text-based games. arxiv, abs/1909.01646, 2019. LINEBREAK prithviraj ammanabrolu and matthew hausknecht. graph constrained reinforcement learning for LINEBREAK natural language action spaces. arxiv preprint arxiv:2001.08837, 2020. LINEBREAK prithviraj ammanabrolu and mark riedl. playing text-adventure games with graph-based deep reinforcement learning. in proceedings of the 2019 conference of the north american chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pp. 3557–3565, 2019. LINEBREAK prithviraj ammanabrolu, ethan tien, matthew j. hausknecht, and mark o. riedl. how to avoid being eaten by a grue: structured exploration strategies for textual worlds. corr, abs/2006.07409, 2020. url https://arxiv.org/abs/2006.07409. LINEBREAK mattia atzeni and maurizio atzori. translating natural language to code: an unsupervised ontology-based approach. in first ieee international conference on artificial intelligence and knowledge engineering, aike 2018, laguna hills, ca, usa, september 26-28, 2018, pp. 1–8. ieee computer society, 2018. doi: 10.1109/aike.2018.00009. url https://doi.org/ 10.1109/aike.2018.00009. LINEBREAK mattia atzeni, jasmina bogojeska, and andreas loukas. sqaler: scaling question answering by decoupling multi-hop and logical reasoning. in a. beygelzimer, y. dauphin, p. liang, and j. wortman vaughan (eds.), advances in neural information processing systems, 2021. url https://openreview.net/forum?id=2cqq_c1i0b. LINEBREAK dzmitry bahdanau, shikhar murty, michael noukhovitch, thien huu nguyen, harm de vries, and aaron c. courville. systematic generalization: what is required and can it be learned? in iclr 2019, 2019. LINEBREAK dana h. ballard. 
an introduction to natural computation. complex adaptive systems. mit press, LINEBREAK kinjal basu, keerthiram murugesan, mattia atzeni, pavan kapanipathi, kartik talamadupula, tim klinger, murray campbell, mrinmaya sachan, and gopal gupta. a hybrid neuro-symbolic approach for text-based games using inductive logic programming. in combining learning and reasoning: programming languages, formalisms, and representations, 2021. LINEBREAK yoshua bengio, nicholas léonard, and aaron c. courville. estimating or propagating gradients through stochastic neurons for conditional computation. corr, abs/1308.3432, 2013. url http://arxiv.org/abs/1308.3432. LINEBREAK reinaldo ac bianchi, paulo e santos, isaac j da silva, luiz a celiberto, and ramon lopez de mantaras. heuristically accelerated reinforcement learning by means of case-based reasoning and transfer learning. journal of intelligent & robotic systems, 91(2):301–312, 2018. LINEBREAK srk branavan, david silver, and regina barzilay. learning to win by reading manuals in a monte LINEBREAK carlo framework. journal of artificial intelligence research, 43:661–704, 2012. LINEBREAK luiz a celiberto jr, jackson p matsuura, ramon lopez de mantaras, and reinaldo ac bianchi. using cases as heuristics in reinforcement learning: a transfer learning application. in twentysecond international joint conference on artificial intelligence, 2011. LINEBREAK ting chen, martin renqiang min, and yizhou sun. learning k-way d-dimensional discrete codes for compact embedding representations. in jennifer g. dy and andreas krause (eds.), proceedings of the 35th international conference on machine learning, icml 2018, stockholmsmässan, stockholm, sweden, july 10-15, 2018, volume 80 of proceedings of machine learning research, pp. 853–862. pmlr, 2018. url http://proceedings.mlr.press/v80/chen18g. html. LINEBREAK marc-alexandre côté, ákos kádár, xingdi yuan, ben kybartas, tavian barnes, emery fine, james moore, matthew hausknecht, layla el asri, mahmoud adada, wendy tay, and adam trischler. textworld: a learning environment for text-based games. corr, abs/1806.11532, 2018. LINEBREAK rajarshi das, ameya godbole, shehzaad dhuliawala, manzil zaheer, and andrew mccallum. a simple approach to case-based reasoning in knowledge bases. arxiv preprint arxiv:2006.14198, 2020. LINEBREAK rajarshi das, manzil zaheer, dung thai, ameya godbole, ethan perez, jay-yoon lee, lizhen tan, lazaros polymenakos, and andrew mccallum. case-based reasoning for natural language queries over knowledge bases. arxiv preprint arxiv:2104.08762, 2021. LINEBREAK jacob devlin, ming-wei chang, kenton lee, and kristina toutanova. bert: pre-training of deep bidirectional transformers for language understanding. in jill burstein, christy doran, and thamar solorio (eds.), proceedings of the 2019 conference of the north american chapter of the association for computational linguistics: human language technologies, naacl-hlt 2019, minneapolis, mn, usa, june 2-7, 2019, volume 1 (long and short papers), pp. 4171– 4186. association for computational linguistics, 2019. url https://www.aclweb.org/ anthology/n19-1423/. LINEBREAK xiaoxiao guo, mo yu, yupeng gao, chuang gan, murray campbell, and shiyu chang. interactive fiction game playing as multi-paragraph reading comprehension with reinforcement learning. in proceedings of the 2020 conference on empirical methods in natural language processing (emnlp), pp. 7755–7765, online, november 2020. association for computational linguistics. doi: 10.18653/v1/2020.emnlp-main.624. 
url https://aclanthology.org/2020. emnlp-main.624. LINEBREAK raia hadsell, sumit chopra, and yann lecun. dimensionality reduction by learning an invariant in 2006 ieee computer society conference on computer vision and pattern mapping. recognition (cvpr 2006), 17-22 june 2006, new york, ny, usa, pp. 1735–1742. ieee computer society, 2006. doi: 10.1109/cvpr.2006.100. url https://doi.org/10.1109/cvpr. 2006.100. LINEBREAK matthew hausknecht, prithviraj ammanabrolu, marc-alexandre côté, and xingdi yuan. interactive in proceedings of the aaai conference on artificial LINEBREAK fiction games: a colossal adventure. intelligence, volume 34, pp. 7903–7910, 2020. LINEBREAK ji he, jianshu chen, xiaodong he, jianfeng gao, lihong li, li deng, and mari ostendorf. deep reinforcement learning with a natural language action space. in proceedings of the 54th annual meeting of the association for computational linguistics, acl 2016, august 7-12, 2016, berlin, germany, volume 1: long papers. the association for computer linguistics, 2016. doi: 10. 18653/v1/p16-1153. url https://doi.org/10.18653/v1/p16-1153. LINEBREAK pavan kapanipathi, veronika thost, siva sankalp patel, spencer whitehead, ibrahim abdelaziz, avinash balakrishnan, maria chang, kshitij fadnis, chulaka gunasekara, bassem makni, nicholas mattei, kartik talamadupula, and achille fokoue. infusing knowledge into the textual entailment task using graph convolutional networks. aaai, 2020. LINEBREAK daniel keysers, nathanael schärli, nathan scales, hylke buisman, daniel furrer, sergii kashubin, nikola momchev, danila sinopalnikov, lukasz stafiniak, tibor tihon, dmitry tsarkov, xiao wang, marc van zee, and olivier bousquet. measuring compositional generalization: a LINEBREAK in 8th international conference on learning comprehensive method on realistic data. representations, iclr 2020, addis ababa, ethiopia, april 26-30, 2020. openreview.net, 2020. url https://openreview.net/forum?id=sygccnnkwr. LINEBREAK janet l kolodner. reconstructive memory: a computer model. cognitive science, 7(4):281–328, LINEBREAK brenden m. lake, tomer d. ullman, joshua b. tenenbaum, and samuel j. gershman. building machines that learn and think like people. behavioral and brain sciences, 40:e253, 2017. doi: 10.1017/s0140525x16001837. LINEBREAK k. murugesan, subhajit chaudhury, and kartik talamadupula. eye of the beholder: improved relation generalization for text-based reinforcement learning agents. arxiv, abs/2106.05387, 2021a. LINEBREAK keerthiram murugesan, mattia atzeni, pushkar shukla, mrinmaya sachan, pavan kapanipathi, and kartik talamadupula. enhancing text-based reinforcement learning agents with commonsense knowledge. corr, abs/2005.00811, 2020. url https://arxiv.org/abs/2005.00811. LINEBREAK keerthiram murugesan, mattia atzeni, pavan kapanipathi, pushkar shukla, sadhana kumaravel, gerald tesauro, kartik talamadupula, mrinmaya sachan, and murray campbell. text-based rl agents with commonsense knowledge: new challenges, environments and baselines. in thirty fifth aaai conference on artificial intelligence, 2021b. LINEBREAK keerthiram murugesan, mattia atzeni, pavan kapanipathi, kartik talamadupula, mrinmaya sachan, and murray campbell. efficient text-based reinforcement learning by jointly leveraging in proceedings of the 59th annual meeting of state and commonsense graph representations. the association for computational linguistics and the 11th international joint conference on natural language processing (volume 2: short papers), pp. 719–725, 2021c.
| 11 | [ 108, 412.1240784, 504.0037874, 465.9216784 ] |
jLoC4ez43PZ.pdf | 2,021 | 2 |
LINEBREAK graphcodebert: pre-training code representations with data flow LINEBREAK daya guo1∗, shuo ren2∗, shuai lu3∗, zhangyin feng4∗, duyu tang5, shujie liu5, long zhou5, nan duan5, alexey svyatkovskiy6, shengyu fu6, michele tufano6, shao kun deng6, colin clement6, dawn drain6, neel sundaresan6, jian yin1, daxin jiang7, and ming zhou5 1school of computer science and engineering, sun yat-sen university. 2beihang university, 3peking university, 4harbin institute of technology, 5microsoft research asia, 6microsoft devdiv, 7microsoft stca LINEBREAK abstract LINEBREAK pre-trained models for programming language have achieved dramatic empirical improvements on a variety of code-related tasks such as code search, code completion, code summarization, etc. however, existing pre-trained models regard a code snippet as a sequence of tokens, while ignoring the inherent structure of code, which provides crucial code semantics and would enhance the code understanding process. we present graphcodebert, a pre-trained model for programming language that considers the inherent structure of code. instead of taking syntactic-level structure of code like abstract syntax tree (ast), we use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of “wherethe-value-comes-from” between variables. such a semantic-level structure is less complex and does not bring an unnecessarily deep hierarchy of ast, the property of which makes the model more efficient. we develop graphcodebert based on transformer. in addition to using the task of masked language modeling, we introduce two structure-aware pre-training tasks. one is to predict code structure edges, and the other is to align representations between source code and code structure. we implement the model in an efficient way with a graph-guided masked attention function to incorporate the code structure. we evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement. results show that code structure and newly introduced pre-training tasks can improve graphcodebert and achieves state-of-the-art performance on the four downstream tasks. we further show that the model prefers structure-level attentions over token-level attentions in the task of code search.1 LINEBREAK introduction LINEBREAK pre-trained models such as elmo (peters et al., 2018), gpt (radford et al., 2018) and bert (devlin et al., 2018) have led to strong improvement on numerous natural language processing (nlp) tasks. these pre-trained models are first pre-trained on a large unsupervised text corpus, and then fine-tuned on downstream tasks. the success of pre-trained models in nlp also promotes the development of pre-trained models for programming language. existing works (kanade et al., 2019; karampatsis & sutton, 2020; feng et al., 2020; svyatkovskiy et al., 2020; buratti et al., 2020) regard a source code as a sequence of tokens and pre-train models on source code to support code-related tasks such as code search, code completion, code summarization, etc. however, previous works only utilize source code for pre-training, while ignoring the inherent structure of code. such code structure provides useful semantic information of code, which would benefit the code understanding process. taking the expression v = max value − min value as an example, v is computed from max value and min value. 
programmers do not always follow the naming conventions so that it’s hard to understand the semantic of the variable v only from its name. the semantic structure of code provides a way to understand the semantic of the variable v by leveraging dependency relation between variables. LINEBREAK ∗work done while this author was an intern at microsoft research asia. contact: daya guo LINEBREAK ([email protected]) LINEBREAK 1all the codes and data are available at https://github.com/microsoft/codebert. LINEBREAK in this work, we present graphcodebert, a pre-trained model for programming language that considers the inherent structure of code. instead of taking syntactic-level structure of code like abstract syntax tree (ast), we leverage semantic-level information of code, i.e. data flow, for pretraining. data flow is a graph, in which nodes represent variables and edges represent the relation of “where-the-value-comes-from” between variables. compared with ast, data flow is less complex and does not bring an unnecessarily deep hierarchy, the property of which makes the model more efficient. in order to learn code representation from source code and code structure, we introduce two new structure-aware pre-training tasks. one is data flow edges prediction for learning representation from code structure, and the other is variable-alignment across source code and data flow for aligning representation between source code and code structure. graphcodebert is based on transformer neural architecture (vaswani et al., 2017) and we extend it by introducing a graph-guided masked attention function to incorporate the code structure. LINEBREAK we pre-train graphcodebert on the codesearchnet dataset (husain et al., 2019), which includes 2.3m functions of six programming languages paired with natural language documents. we evaluate the model on four downstream tasks: natural language code search, clone detection, code translation, and code refinement. experiments show that our model achieves state-of-the-art performance on the four tasks. further analysis shows that code structure and newly introduced pre-training tasks can improve graphcodebert and the model has consistent preference for attending data flow. LINEBREAK in summary, the contributions of this paper are: (1) graphcodebert is the first pre-trained model that leverages semantic structure of code to learn code representation. (2) we introduce two new structure-aware pre-training tasks for learning representation from source code and data flow. (3) graphcodebert provides significant improvement on four downstream tasks, i.e. code search, clone detection, code translation, and code refinement. LINEBREAK related works LINEBREAK pre-trained models for programming languages inspired by the big success of pre-training in nlp (devlin et al., 2018; yang et al., 2019; liu et al., 2019; raffel et al., 2019), pre-trained models for programming languages also promotes the development of code intelligence (kanade et al., 2019; feng et al., 2020; karampatsis & sutton, 2020; svyatkovskiy et al., 2020; buratti et al., 2020). kanade et al. (2019) pre-train a bert model on a massive corpus of python source codes by masked language modeling and next sentence prediction objectives. feng et al. (2020) propose codebert, a bimodal pre-trained model for programming and natural languages by masked language modeling and replaced token detection to support text-code tasks such as code search. 
karampatsis & sutton (2020) pre-train contextual embeddings on a javascript corpus using the elmo framework for program repair task. svyatkovskiy et al. (2020) propose gpt-c, which is a variant of the gpt-2 trained from scratch on source code data to support generative tasks like code completion. buratti et al. (2020) present c-bert, a transformer-based language model pre-trained on a collection of repositories written in c language, and achieve high accuracy in the abstract syntax tree (ast) tagging task. LINEBREAK different with previous works, graphcodebert is the first pre-trained model that leverages code structure to learn code representation to improve code understanding. we further introduce a graphguided masked attention function to incorporate the code structure into transformer and two new structure-aware pre-training tasks to learn representation from source code and code structure. LINEBREAK neural networks with code structure in recent years, some neural networks leveraging code structure such as ast have been proposed and achieved strong performance in code-related tasks like code completion (li et al., 2017; alon et al., 2019; kim et al., 2020), code generation (rabinovich et al., 2017; yin & neubig, 2017; brockschmidt et al., 2018), code clone detection (wei & li, 2017; zhang et al., 2019; wang et al., 2020), code summarization (alon et al., 2018; hu et al., 2018) and so on (nguyen & nguyen, 2015; allamanis et al., 2018; hellendoorn et al., 2019). nguyen & nguyen (2015) propose an ast-based language model to support the detection and suggestion of a syntactic template at the current editing location. allamanis et al. (2018) use graphs to represent programs and graph neural network to reason over program structures. hellendoorn et al. (2019) propose two different architectures using a gated graph neural network and transformers for combining local and global information to leverage richly structured representations of source code. however, these LINEBREAK works leverage code structure to learn models on specific tasks from scratch without using pre-trained models. in this work, we study how to leverage code structure for pre-training code representation. LINEBREAK data flow LINEBREAK in this section, we describe the basic concept and extraction of data flow. in next section, we will describe how to use data flow for pre-training. LINEBREAK data flow is a graph that represents dependency relation between variables, in which nodes represent variables and edges represent where the value of each variable comes from. unlike ast, data flow is same under different abstract grammars for the same source code. such code structure provides crucial code semantic information for code understanding. taking v = max value − min value as an example, programmers do not always follow the naming conventions so that it is hard to understand the semantic of the variable. data flow provides a way to understand the semantic of the variable v to some extent, i.e. the value of v comes from max value and min value in data flow. besides, data flow supports the model to consider long-range dependencies induced by using the same variable or function in distant locations. taking figure 1 as an example, there are four variables with same name (i.e. x3, x7, x9 and x11) but with different semantic. the graph in the figure shows dependency relation between these variables and supports x11 to pay more attention to x7 and x9 instead of x3. next, we describe how to extract data flow from a source code. 
LINEBREAK figure 1: the procedure of extracting data flow given a source code. the graph in the rightmost is data flow that represents the relation of ”where-the-value-comes-from” between variables. LINEBREAK figure 1 shows the extraction of data flow through a source code. given a source code c = {c1, c2, ..., cn}, we first parse the code into an abstract syntax tree (ast) by a standard compiler tool2. the ast includes syntax information of the code and terminals (leaves) are used to identify the variable sequence, denoted as v = {v1, v2, ..., vk}. we take each variable as a node of the graph and an direct edge ε = (cid:104)vi, vj(cid:105) from vi to vj refers that the value of j-th variable comes from i-th variable. taking x = expr as an example, edges from all variables in expr to x are added into the graph. we denote the set of directed edges as e = {ε1, ε2, ..., εl} and the graph g(c) = (v, e) is data flow used to represent dependency relation between variables of the source code c. LINEBREAK graphcodebert LINEBREAK in this section, we describe graphcodebert, a graph-based pre-trained model based on transformer for programming language. we introduce model architecture, graph-guided masked attention and pre-training tasks including standard masked language model and newly introduced ones. more details about model pre-training setting are provided in the appendix a. LINEBREAK 2https://github.com/tree-sitter/tree-sitter LINEBREAK figure 2: an illustration about graphcodebert pre-training. the model takes source code paired with comment and the corresponding data flow as the input, and is pre-trained using standard masked language modeling (devlin et al., 2018) and two structure-aware tasks. one structure-aware task is to predict where a variable is identified from (marked with orange lines) and the other is data flow edges prediction between variables (marked with blue lines). LINEBREAK model architecture LINEBREAK figure 2 shows the model architecture of graphcodebert. we follow bert (devlin et al., 2018) and use the multi-layer bidirectional transformer (vaswani et al., 2017) as the model backbone. instead of only using source code, we also utilize paired comments to pre-train the model to support more code-related tasks involving natural language such as natural language code search (feng et al., 2020). we further take data flow, which is a graph, as a part of the input to the model. LINEBREAK given a source code c = {c1, c2, ..., cn} with its comment w = {w1, w2, ..., wm}, we can obtain the corresponding data flow g(c) = (v, e) as discussed in the section 3, where v = {v1, v2, ..., vk} is a set of variables and e = {ε1, ε2, ..., εl} is a set of direct edges that represent where the value of each variable comes from. we concatenate the comment, source code and the set of variables as the sequence input x = {[cls], w, [sep ], c, [sep ], v }, where [cls] is a special token in front of three segments and [sep ] is a special symbol to split two kinds of data types. LINEBREAK graphcodebert takes the sequence x as the input and then converts the sequence into input vectors h 0. for each token, its input vector is constructed by summing the corresponding token and position embeddings. we use a special position embedding for all variables to indicate that they are nodes of data flow. the model applies n transformer layers over the input vectors to produce contextual representations h n = transf ormern(h n−1), n ∈ [1, n ]. 
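before the transformer layers are applied, the data flow and the concatenated input sequence are assembled as described above. the following toy sketch illustrates the idea; it uses python's built-in ast module on straight-line python code (only simple `x = expr` assignments), whereas the paper parses six languages with a standard compiler tool (tree-sitter) and walks the full ast. the comment text and whitespace tokenization below are placeholders, not the paper's preprocessing.

```python
import ast

def extract_data_flow(source: str):
    """Toy "where-the-value-comes-from" extraction for straight-line Python code.

    Returns (variables, edges): `variables` is the ordered variable sequence V and
    `edges` holds index pairs (i, j) meaning the value of variable j comes from i.
    """
    tree = ast.parse(source)
    variables, edges = [], []
    latest = {}  # variable name -> index of its most recent occurrence

    def new_node(name):
        variables.append(name)
        return len(variables) - 1

    for stmt in tree.body:
        if not (isinstance(stmt, ast.Assign) and isinstance(stmt.targets[0], ast.Name)):
            continue
        # every variable read on the right-hand side feeds the assigned variable
        sources = []
        for sub in ast.walk(stmt.value):
            if isinstance(sub, ast.Name):
                if sub.id not in latest:
                    latest[sub.id] = new_node(sub.id)
                sources.append(latest[sub.id])
        target = new_node(stmt.targets[0].id)
        latest[stmt.targets[0].id] = target
        edges.extend((src, target) for src in sources)
    return variables, edges

variables, edges = extract_data_flow("v = max_value - min_value")
# variables == ['max_value', 'min_value', 'v'], edges == [(0, 2), (1, 2)]

# the model input is the concatenation [CLS], comment, [SEP], code, [SEP], variables
comment = "compute the value range".split()
code_tokens = "v = max_value - min_value".split()
X = ["[CLS]", *comment, "[SEP]", *code_tokens, "[SEP]", *variables]
```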
each transformer layer contains an architecturally identical transformer that applies a multi-headed self-attention operation (vaswani et al., 2017) followed by a feed-forward layer over the input $H^{n-1}$ in the $n$-th layer. LINEBREAK $G^n = LN(\mathrm{MultiAttn}(H^{n-1}) + H^{n-1})$ LINEBREAK $H^n = LN(\mathrm{FFN}(G^n) + G^n)$ LINEBREAK where $\mathrm{MultiAttn}$ is a multi-headed self-attention mechanism, $\mathrm{FFN}$ is a two-layer feed-forward network, and $LN$ represents a layer normalization operation. for the $n$-th transformer layer, the output $\hat{G}^n$ of a multi-headed self-attention is computed via: LINEBREAK $Q_i = H^{n-1}W_i^Q, \quad K_i = H^{n-1}W_i^K, \quad V_i = H^{n-1}W_i^V$ LINEBREAK $\mathrm{head}_i = \mathrm{softmax}\!\left(\frac{Q_i K_i^\top}{\sqrt{d_k}} + M\right)V_i$ LINEBREAK $\hat{G}^n = [\mathrm{head}_1; \ldots; \mathrm{head}_u]\,W_n^O$ (5) LINEBREAK where the previous layer's output $H^{n-1} \in \mathbb{R}^{|X| \times d_h}$ is linearly projected to a triplet of queries, keys and values using model parameters $W_i^Q, W_i^K, W_i^V \in \mathbb{R}^{d_h \times d_k}$, respectively. $u$ is the number of heads, $d_k$ is the dimension of a head, and $W_n^O \in \mathbb{R}^{d_h \times d_h}$ is a model parameter. $M \in \mathbb{R}^{|X| \times |X|}$ is a mask matrix, where $M_{ij}$ is 0 if the $i$-th token is allowed to attend to the $j$-th token and $-\infty$ otherwise. LINEBREAK graph-guided masked attention
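to make the masked self-attention above concrete, here is a single-head numeric sketch. the paper uses u heads and derives the mask m from the code structure (the graph-guided masked attention just introduced); the toy mask below is arbitrary and only shows the mechanics of adding 0 / −∞ to the attention logits before the softmax.

```python
import math
import torch

def masked_attention(H, Wq, Wk, Wv, Wo, mask):
    """Single-head version of the masked attention above: softmax(QK^T/sqrt(d_k) + M) V."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / math.sqrt(K.size(-1)) + mask
    return torch.softmax(scores, dim=-1) @ V @ Wo

# toy sequence of 5 tokens with hidden size 16 and head size 8
d_h, d_k, n = 16, 8, 5
H = torch.randn(n, d_h)
Wq, Wk, Wv = (torch.randn(d_h, d_k) for _ in range(3))
Wo = torch.randn(d_k, d_h)
M = torch.zeros(n, n)
M[4, 0] = float("-inf")   # forbid token 4 from attending to token 0 (illustrative only;
                          # GraphCodeBERT would instead build M from the data flow)
out = masked_attention(H, Wq, Wk, Wv, Wo, M)   # shape (5, 16)
```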
| 4 | [ 108.249, 698.0240784, 293.2753174, 707.9866784 ] |
eIHYL6fpbkA.pdf | 2,021 | 0 |
LINEBREAK removing undesirable feature contributions using out-of-distribution data LINEBREAK saehyung lee, changhwa park, hyungyu lee, jihun yi, jonghyun lee, sungroh yoon∗ electrical and computer engineering, aiis, asri, inmc, and institute of engineering research seoul national university seoul 08826, south korea {halo8218,omega6464,rucy74,t080205,leejh9611,sryoon}@snu.ac.kr LINEBREAK abstract LINEBREAK several data augmentation methods deploy unlabeled-in-distribution (uid) data to bridge the gap between the training and inference of neural networks. however, these methods have clear limitations in terms of availability of uid data and dependence of algorithms on pseudo-labels. herein, we propose a data augmentation method to improve generalization in both adversarial and standard learning by using out-of-distribution (ood) data that are devoid of the abovementioned issues. we show how to improve generalization theoretically using ood data in each learning scenario and complement our theoretical analysis with experiments on cifar-10, cifar-100, and a subset of imagenet. the results indicate that undesirable features are shared even among image data that seem to have little correlation from a human point of view. we also present the advantages of the proposed method through comparison with other data augmentation methods, which can be used in the absence of uid data. furthermore, we demonstrate that the proposed method can further improve the existing state-of-the-art adversarial training. LINEBREAK introduction LINEBREAK the power of the enormous amount of data suggested by the empirical risk minimization (erm) principle (vapnik & vapnik, 1998) has allowed deep neural networks (dnns) to perform outstandingly on many tasks, including computer vision (krizhevsky et al., 2012) and natural language processing (hinton et al., 2012). however, most of the practical problems encountered by dnns have high-dimensional input spaces, and nontrivial generalization errors arise owing to the curse of dimensionality (bellman, 1961). moreover, neural networks have been found to be easily deceived by adversarial perturbations with a high degree of confidence (szegedy et al., 2013). several studies (goodfellow et al., 2014; krizhevsky et al., 2012) have been conducted to address these generalization problems resulting from erm. most of them handled the generalization problems by extending the training distribution (madry et al., 2017; lee et al., 2020). nevertheless, it has been demonstrated that more data are needed to achieve better generalization (schmidt et al., 2018). recent methods (carmon et al., 2019; xie et al., 2019) introduced unlabeled-in-distribution (uid) data to compensate for the lack of training samples. however, there are limitations associated with these methods. first, obtaining suitable uid data for selected classes is challenging. second, when applying supervised learning methods on pseudo-labeled data, the effect of data augmentation depends heavily on the accuracy of the pseudo-label generator. LINEBREAK in our study, in order to break through the limitations outlined above, we propose an approach that promotes robust and standard generalization using out-of-distribution (ood) data. especially, motivated by previous studies demonstrating the existence of common adversarial space among different images or even datasets (naseer et al., 2019; poursaeed et al., 2018), we show that ood data can be leveraged for adversarial learning. 
likewise, if the ood data share the same undesirable features as those of the in-distribution data in terms of standard generalization, they can be leveraged for standard learning. by definition, in this work, the classes of the ood data differ from those of the in-distribution data, and our method do not use the label information of the ood data. therefore the LINEBREAK ∗correspondence to: sungroh yoon [email protected]. LINEBREAK proposed method is free from the previously mentioned problems caused by uid data. we present a theoretical model which demonstrates how to improve generalization using ood data in both adversarial and standard learning. in our theoretical model, we separate desirable and undesirable features and show how the training on ood data, which shares undesirable features with in-distribution data, changes the weight values of the classifier. based on the theoretical analysis, we introduce out-ofdistribution data augmented training (oat), which assigns a uniform distribution label to all the ood data samples to remove the influence of undesirable features in adversarial and standard learning. in the proposed method, each batch is composed of training data and ood data, and ood data regularize the training so that only features that are strongly correlated with class labels are learned. we complement our theoretical findings with experiments on cifar-10 (krizhevsky et al., 2009), cifar-100 (krizhevsky et al., 2009), and a subset of imagenet (deng et al., 2009). in addition, we present the empirical evidence for the transferability of undesirable features through further studies on various datasets including simpson characters (attia, 2018), fashion product images (aggarwal, 2018), svhn (netzer et al., 2011), places365 (zhou et al., 2017), and visda-17 (peng et al., 2017). LINEBREAK (i) we propose a simple method, out-of-distribution data augmented traincontributions ing (oat), to leverage ood data for adversarial and standard learning, and theoretical analyses demonstrate how our proposed method can improve robust and standard generalization. (ii) the results of experimental procedures on cifar-10, cifar-100, and a subset of imagenet suggest that oat can help reduce the generalization gap in adversarial and standard learning. (iii) by applying oat using various ood datasets, it is shown that undesirable features are shared among diverse image datasets. it is also demonstrated that oat can effectively extend training distribution by comparison with other data augmentation methods that can be employed in the absence of uid data. (iv) the state-of-the-art adversarial training method using uid data is found to further improve by incorporating the proposed method of leveraging ood data. LINEBREAK background LINEBREAK undesirable features in adversarial learning tsipras et al. (2018) demonstrated the existence of a trade-off between standard accuracy and adversarial robustness with the distinction between robust and non-robust features. they showed the possibility that adversarial robustness is incompatible with standard accuracy by constructing a binary classification task that the data model consists of inputlabel pairs (x, y) ∈ rd+1 × {±1} sampled from a distribution as follows: LINEBREAK (cid:26)+y w.p. p LINEBREAK −y w.p. 1 − p LINEBREAK i.i.d.∼ n ((cid:15)y, 1). LINEBREAK here, x1 is a robust feature that is strongly correlated with the label, and the other features x2, . . . , xd+1 are non-robust features that are weakly correlated with the label. 
(cid:15) is small but sufficiently large such that a simple classifier attains a high standard accuracy, and p ≥ 0.5. to characterize adversarial robustness, the definitions of expected standard loss βs and expected adversarial loss βa for a data distribution d are defined as follows: LINEBREAK βs = e LINEBREAK (x,y)∼d LINEBREAK [l(x, y; θ)] , LINEBREAK βa = e LINEBREAK (x,y)∼d LINEBREAK max δ∈s LINEBREAK l(x + δ, y; θ) LINEBREAK here, l(; θ) is the loss function of the model, and s represents the set of perturbations that the adversary can apply to deceive the model. for equation (1), tsipras et al. (2018) showed that the following classifier can yield a small expected standard loss: LINEBREAK favg(x) = sign(w(cid:62) LINEBREAK unifx), where wunif = LINEBREAK they also proved that the classifier is vulnerable to adversarial perturbations, and that adversarial training results in a classifier that assigns zero weight values to non-robust features. LINEBREAK transferability of adversarial perturbations naseer et al. (2019) produced domain-agnostic adversarial perturbations, thereby showing common adversarial space among different datasets. they showed that an adversarial function trained on paintings, cartoons or medical data can deceive the classifier on imagenet data with a high success rate. the study findings show that even datasets from considerably different domains share non-robust features. therefore a method for supplementing the data needed for adversarial training is presented herein. LINEBREAK undesirable features in standard learning wang et al. (2020) noted that convolutional neural networks (cnn) can capture high-frequency components in images that are almost imperceptible to a human. this ability is thought to be closely related to the generalization behaviors of cnns, especially the capacity in memorizing random labels. several studies (geirhos et al., 2018; bahng et al., 2019) reported that cnns are biased towards local image features, and the generalization performance can be improved by regularizing that bias. in this context, a method of regularizing undesirable feature contributions using ood data is proposed, assuming that undesirable features arise from the bias of cnns or insufficient training data and are widely distributed in the input space. LINEBREAK methods LINEBREAK theoretical motivation LINEBREAK in this section, we analyze theoretically how ood data can be used to make up for the insufficient training samples in adversarial training based on the dichotomy between robust and non-robust features. the theoretical motivation of using ood data to reduce the contribution of undesirable features in standard learning can be found in appendix b. LINEBREAK i=1 and target feature-label pairs {(φ(˜xi), yi)}n LINEBREAK setup and overview we denote in-distribution data as target data. given a target dataset i=1 ⊂ x × {±1} sampled from a data distribution ˜d, where x is the input space, we {(˜xi, yi)}n suppose that a feature extractor φ : x → z ⊂ rd and a linear classification model are trained on {(˜xi, yi)}n i=1, respectively, to yield the small expected standard loss of the classification model. we then define an ood dataset {ˆxi}m i=1 ⊂ x sampled from a data distribution ˆd that has the same distribution of non-robust features as that of ˜d with reference to the preceding studies (naseer et al., 2019; moosavi-dezfooli et al., 2017). 
after fixing φ to facilitate the theoretical analysis on this framework, we demonstrate how adversarial training on the ood dataset affects the weight values of our classifier. LINEBREAK our data model the feature extractor φ can be considered to consist of several feature extractors φ : x → z ⊂ r. hence, we can set the distributions of the target feature-label pair (φ(˜x), y) = (˜z, y) ∈ rd × {±1} and the ood feature vector φ(ˆx) = ˆz ∈ rd as follows: LINEBREAK i.i.d.∼ n (ηy, 1), i.i.d.∼ n (ηq, 1), LINEBREAK where u.a.r stands for uniformly at random. from here on, we will only deal with the ood data, therefore the accents (tilde and caret) that distinguish between target data and ood data are omitted. in equation (4), the feature z1 = φ1(x) is the output of a robust feature extractor φ1, and the other features z2, . . . , zd+1 are those of non-robust feature extractors φ2, . . . , φd+1. since the ood input vectors do not have the same robust features as the target input vectors, z1 has zero mean and a small variance. furthermore, because the ood data have the same distribution of non-robust features as the target data, z2, . . . , zd+1 have a non-zero mean and a larger variance than the robust features. in addition, q represents the unknown label associated with the non-robust features, and η is a nonnegative constant which represents the degree of correlation between the non-robust features and the unknown label. please note that the input space of our classifier is the output space of φ in our data model. therefore, η is not limited to a small value even in the context of the (cid:96)p-bounded adversary. rather, the high degree of confidence that dnns show for the adversarial examples (goodfellow et al., 2014) suggests that η is large. LINEBREAK our linear classification model according to section 2, we know that our linear classification model (logistic regression), defined as follows, yields a low expected standard loss while demonstrating high adversarial vulnerability. LINEBREAK p(y = +1 | z) = σ(w(cid:62)z), p(y = −1 | z) = 1 − σ(w(cid:62)z), where w = LINEBREAK to observe the effect of adversarial training on the ood dataset, we train our classifier by applying the stochastic gradient descent algorithm to the cross-entropy loss function l(; w). LINEBREAK firstly, we construct the adversarial feature vector ¯z = φ(x + δ) : δ ∈ s against our classifier for adversarial training. theorem 1. let t ∈ [0, 1] be the given target value of the feature vector z in our classification model, and λ be a non-negative constant. then, when t = 0.5, the expectation of the adversarial feature vector is LINEBREAK ez [¯z1] = ez [z1] , ez [¯zk] ≈ ez [zk] + λ · q, where k ∈ {2, . . . , d + 1}. LINEBREAK (all the proofs of the theorems in this paper can be found in appendix a.) here, we assume the (cid:96)∞-bounded adversary. in theorem 1, we can observe that the adversary pushes the non-robust features farther in the direction of the unknown label q, which coincides with our intuition. when the given target value t is 0.5, the adversary will make our classification model output equal to zero or one to yield a large loss. LINEBREAK our classification model is trained on the adversarial features shown in theorem 1. theorem 2. when t = 0.5, the expected gradient of the loss function l(¯z, t; w) with respect to the weight vector w of our classification model is (cid:21) LINEBREAK (η + λ), where k ∈ {2, . . . , d + 1}. 
LINEBREAK e¯z LINEBREAK (cid:20) ∂l ∂wk LINEBREAK thus, the adversarial training with t = 0.5 on the ood dataset leads to the weight values corresponding to the non-robust features converging to zero while preserving w1 from the gradient update. this shows that we can reduce the impact of non-robust features using the ood dataset. we, however, should not only reduce the influence of the non-robust features, but also improve the classification accuracy using the robust feature. this can be achieved through the adversarial training on the target dataset. accordingly, we show the effect of the adversarial training on the ood dataset when w1 > 0 in our example. theorem 3. when t = 0.5 and w1 > 0, the expected gradient of the loss function l(¯z, t; w) with respect to the weight vector w of our classification model is LINEBREAK e¯z LINEBREAK λ, e¯z LINEBREAK (cid:20) ∂l ∂wk LINEBREAK (η + λ), where k ∈ {2, . . . , d + 1}. LINEBREAK theorem 3 shows that when w1 > 0, the adversarial training with t = 0.5 on the ood dataset reduces the influence of all the features in z. however, we can see that the expected gradients for the weight values associated with the non-robust features are always greater than the expected gradient for the weight value associated with the robust feature. in addition, the greater the value of η, the faster the weight value associated with it converges to zero. this means that the contribution of non-robust features with high influence decreases rapidly. in the case of multiclass classification, it is straightforward that t = 0.5 corresponds to uniform distribution label tunif = [ 1 c ], where c is the number of the classes. intuitively, the meaning of (x, tunif) is that the input x lies on the decision boundary. to reduce the training loss for (x, tunif), the classifier will learn that the features of x do not contribute to a specific class, which can be understood as removing the contributions of the features of x. LINEBREAK out-of-distribution data augmented training LINEBREAK based on our theoretical analysis, we introduce out-of-distribution data augmented training (oat). oat is the training on the union of the target dataset dt and the ood dataset do. when applying our proposed method, we need to consider the following two points: 1) temporary labels associated with ood data samples are required for supervised learning, and 2) the loss functions corresponding to dt and do should be properly combined. LINEBREAK first, we assign a uniform distribution label tunif to all the ood data samples as confirmed in our theoretical analysis. this labeling method enables us to leverage ood data for supervised learning at no extra cost. moreover, it means that our method is completely free from the limitations of the methods using uid data (see section 1). LINEBREAK second, although ood data can be used to improve the standard and robust generalization of neural networks, the training on target data is essential to enhance the classification accuracy of neural LINEBREAK networks. in addition, according to theorem 3, adversarial training on the pairs of ood data samples and tunif affects the weight for robust features as well as that for non-robust features. hence, the balance between losses from dt and do is important in oat. 
out-of-distribution data augmented training LINEBREAK based on our theoretical analysis, we introduce out-of-distribution data augmented training (oat). oat is the training on the union of the target dataset dt and the ood dataset do. when applying our proposed method, we need to consider the following two points: 1) temporary labels associated with ood data samples are required for supervised learning, and 2) the loss functions corresponding to dt and do should be properly combined. LINEBREAK first, we assign a uniform distribution label tunif to all the ood data samples as confirmed in our theoretical analysis. this labeling method enables us to leverage ood data for supervised learning at no extra cost. moreover, it means that our method is completely free from the limitations of the methods using uid data (see section 1). LINEBREAK second, although ood data can be used to improve the standard and robust generalization of neural networks, the training on target data is essential to enhance the classification accuracy of neural networks. in addition, according to theorem 3, adversarial training on the pairs of ood data samples and tunif affects the weight for robust features as well as that for non-robust features. hence, the balance between losses from dt and do is important in oat. for this reason, we introduce a hyperparameter α ∈ r+ into our proposed method and train neural networks as follows: LINEBREAK oat-a: minθ e(xt,y)∈dt [ maxδ∈s l(xt + δ, y; θ) ] + α exo∈do [ maxε∈s l(xo + ε, tunif; θ) ], LINEBREAK oat-s: minθ e(xt,y)∈dt [l(xt, y; θ)] + α exo∈do [l(xo, tunif; θ)]. LINEBREAK here, oat-a and oat-s represent ood data augmented adversarial and standard learning, respectively. the pseudo-code for the overall procedure of our method is presented in algorithm 1. LINEBREAK algorithm 1 out-of-distribution augmented training (oat) LINEBREAK require: target dataset dt, ood dataset do, uniform distribution label tunif, batch size n, training iterations t, learning rate τ, hyperparameter α, adversarial attack function g LINEBREAK 1: for t = 1 to t do LINEBREAK 2: (xt, y) = sample(dataset = dt, size = n/2) LINEBREAK 3: xo = sample(dataset = do, size = n/2) LINEBREAK 4: if adversarial learning then LINEBREAK 5: [¯xt, ¯xo] ← g([xt, xo], [y, tunif]; θ) LINEBREAK 6: else if standard learning then LINEBREAK 7: [¯xt, ¯xo] ← [xt, xo] LINEBREAK 8: end if LINEBREAK 9: model update: θ ← θ − τ · ∇θ average( (1/2) l(¯xt, y; θ) + (α/2) l(¯xo, tunif; θ) ) LINEBREAK 10: end for LINEBREAK 11: output: trained model parameter θ
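a minimal pytorch sketch of a single oat update is shown below; it is not the authors' implementation. the backbone, the one-step fgsm attack standing in for g, the values of ε and α, and the batch construction are illustrative assumptions, and the cross-entropy is written against soft label distributions so that tunif can be used for the ood half of the batch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_ce(logits, targets):
    # cross-entropy against (possibly soft) label distributions
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def fgsm(model, x, targets, eps):
    # one-step l_inf attack maximizing the soft-label cross-entropy (stands in for g)
    x = x.clone().detach().requires_grad_(True)
    loss = soft_ce(model(x), targets)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def oat_step(model, opt, xt, yt, xo, num_classes,
             alpha=1.0, eps=8 / 255, adversarial=True):
    """one oat update: half target batch with one-hot labels, half ood batch with t_unif."""
    t_target = F.one_hot(yt, num_classes).float()
    t_unif = torch.full((xo.size(0), num_classes), 1.0 / num_classes, device=xo.device)

    if adversarial:                       # oat-a
        xt = fgsm(model, xt, t_target, eps)
        xo = fgsm(model, xo, t_unif, eps)
    # otherwise oat-s: train directly on the clean inputs

    loss = 0.5 * soft_ce(model(xt), t_target) + alpha * 0.5 * soft_ce(model(xo), t_unif)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# toy usage with random tensors standing in for dt and do mini-batches
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
xt, yt = torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))
xo = torch.rand(64, 3, 32, 32)
print(oat_step(model, opt, xt, yt, xo, num_classes=10))
```

in practice g would be a stronger attack such as pgd and the model a standard image classifier; only the structure of the update is meant to mirror algorithm 1.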
related studies LINEBREAK adversarial examples many adversarial attack methods have been proposed, including the projected gradient descent (pgd) and carlini & wagner (cw) attacks (madry et al., 2017; carlini & wagner, 2017). the pgd attack employs an iterative procedure of the fast gradient sign method (fgsm) (goodfellow et al., 2014) to find worst-case examples in which the training loss is maximized. the cw attack finds adversarial examples using cw losses instead of cross-entropy losses. recently, croce & hein (2020) proposed autoattack (aa), a powerful ensemble attack comprising two extensions of the pgd attack and two existing attacks (croce & hein, 2019; andriushchenko et al., 2019). to defend against these adversarial attacks, various adversarial defense methods have been developed (goodfellow et al., 2014; kannan et al., 2018). madry et al. (2017) introduced adversarial training that uses adversarial examples as training data, and zhang et al. (2019) proposed trades to optimize a surrogate loss which is a sum of the natural error and the boundary error. LINEBREAK ood detection lee et al. (2017) and hendrycks et al. (2018) dealt with the overconfidence problem of confidence score-based ood detectors. they used uniform distribution labels, as in our method, to resolve the overconfidence issue. they did not, however, address the generalization problems of neural networks. specifically, our theoretical results allow us to explain the classification performance improvement of classifiers, which was only considered a secondary effect in the abovementioned studies. further related works can be found in appendix c. LINEBREAK experimental results and discussion LINEBREAK experimental setup LINEBREAK ood datasets we created ood datasets from the 80 million tiny images dataset (torralba et al., 2008) (80m-ti), using the work of carmon et al. (2019), for cifar-10 and cifar-100, respectively. in addition, we resized (using a bilinear interpolation) imagenet to dimensions of 64 × 64 and LINEBREAK table 1: accuracy (%) comparison of the oat model with standard, pgd, and trades on cifar10, cifar100, and imgnet10 (64×64) under different threat models. we show the improved results compared to the counterpart of each model in bold.
| 5 | [195.9747148, 657.4760784, 221.000766, 667.4386784] |
7grkzyj89A_.pdf | 2,022 | 0 |
LINEBREAK generalization through the lens of leave-one-out error LINEBREAK gregor bachmann a, thomas hofmann a, and aurélien lucchi b LINEBREAK a department of computer science, eth zürich, switzerland; b department of mathematics and computer science, university of basel, {gregor.bachmann, thomas.hofmann}@inf.ethz.ch, [email protected] LINEBREAK abstract LINEBREAK despite the tremendous empirical success of deep learning models to solve various learning tasks, our theoretical understanding of their generalization ability is very limited. classical generalization bounds based on tools such as the vc dimension or rademacher complexity are so far unsuitable for deep models, and it is doubtful that these techniques can yield tight bounds even in the most idealistic settings (nagarajan & kolter, 2019). in this work, we instead revisit the concept of leave-one-out (loo) error to measure the generalization ability of deep models in the so-called kernel regime. while popular in statistics, the loo error has been largely overlooked in the context of deep learning. by building upon the recently established connection between neural networks and kernel learning, we leverage the closed-form expression for the leave-one-out error, giving us access to an efficient proxy for the test error. we show both theoretically and empirically that the leave-one-out error is capable of capturing various phenomena in generalization theory, such as double descent, random labels or transfer learning. our work therefore demonstrates that the leave-one-out error provides a tractable way to estimate the generalization ability of deep neural networks in the kernel regime, opening the door to potential new research directions in the field of generalization. LINEBREAK introduction LINEBREAK neural networks have achieved astonishing performance across many learning tasks such as in computer vision (he et al., 2016), natural language processing (devlin et al., 2019) and graph learning (kipf & welling, 2017), among many others. despite the large overparametrized nature of these models, they have shown a surprising ability to generalize well to unseen data. the theoretical understanding of this generalization ability has been an active area of research where contributions have been made using diverse analytical tools (arora et al., 2018; bartlett et al., 2019; 2017; neyshabur et al., 2015; 2018). yet, the theoretical bounds derived by these approaches typically suffer from one or several of the following limitations: i) they are known to be loose or even vacuous (jiang et al., 2019), ii) they only apply to randomized networks (dziugaite & roy, 2018; 2017), and iii) they rely on the concept of uniform convergence which was shown to be non-robust against adversarial perturbations (nagarajan & kolter, 2019). in this work, we revisit the concept of leave-one-out error (loo) and demonstrate its surprising ability to predict generalization for deep neural networks in the so-called kernel regime. this object is an important statistical estimator of the generalization performance of an algorithm that is theoretically motivated by a connection to the concept of uniform stability (pontil, 2002). it has also recently been advocated by nagarajan & kolter (2019) as a potential substitute for uniform convergence bounds.
despite being popular in classical statistics (cawley & talbot, 2003; vehtari et al., 2016; fukunaga & hummels, 1989; zhang, 2003), loo has largely been overlooked in the context of deep learning, likely due to its high computational cost. however, recent advances from the neural tangent kernel perspective (jacot et al., 2018) render the loo error all of a sudden tractable thanks to the availability of a closed-form expression due to stone (1974, eq. 3.13). while the role of the loo error as an estimator of the generalization performance is debated in the statistics community LINEBREAK (zhang & yang, 2015; bengio & grandvalet, 2004; breiman, 1996; kearns & ron, 1999), the recent work by patil et al. (2021) established a consistency result in the case of ridge regression, ensuring that loo converges to the generalization error in the large sample limit, under mild assumptions on the data distribution. inspired by these recent advances, in this work, we investigate the use of loo as a generalization measure for deep neural networks in the kernel regime. we find that loo is a surprisingly rich descriptor of the generalization error, capturing a wide range of phenomena such as random label fitting, double descent and transfer learning. specifically, we make the following contributions: LINEBREAK • we extend the loo expression for the multi-class setting to the case of zero regularization and derive a new closed-form formula for the resulting loo accuracy. LINEBREAK • we investigate both the loo loss and accuracy in the context of deep learning through the lens of the ntk, and demonstrate empirically that they both capture the generalization ability of networks in the kernel regime in a variety of settings. LINEBREAK • we showcase the utility of loo for practical networks by accurately predicting their transfer learning performance. LINEBREAK • we build on the mathematically convenient form of the loo loss to derive some novel insights into double descent and the role of regularization. LINEBREAK our work is structured as follows. we give an overview of related works in section 2. in section 3 we introduce the setting along with the loo error and showcase our extensions to the multi-class case and loo accuracy. then, in section 4 we analyze the predictive power of loo in various settings and present our theoretical results on double descent. we discuss our findings and future work in section 5. we release the code for our numerical experiments on github¹. LINEBREAK related work LINEBREAK originally introduced by lachenbruch (1967); mosteller & tukey (1968), the leave-one-out error is a standard tool in the field of statistics that has been used to study and improve a variety of models, ranging from support vector machines (weston, 1999), fisher discriminant analysis (cawley & talbot, 2003), linear regression (hastie et al., 2020; patil et al., 2021) to non-parametric density estimation (fukunaga & hummels, 1989). various theoretical perspectives have justified the use of the loo error, including for instance the framework of uniform stability (bousquet & elisseeff, 2002) or generalization bounds for knn models (devroye & wagner, 1979). elisseeff et al. (2003) focused on the stability of a learning algorithm to demonstrate a formal link between loo error and generalization. the vanilla definition of the loo requires access to a large number of trained models which is typically computationally expensive.
crucially, stone (1974) derived an elegant closed-form formula for kernels, completely removing the need for training multiple models. in the context of deep learning, the loo error remains largely unexplored. shekkizhar & ortega (2020) propose to fit polytope interpolators to a given neural network and estimate its resulting loo error. alaa & van der schaar (2020) rely on a jackknife estimator, applied in a loo fashion to obtain better confidence intervals for bayesian neural networks. finally, we want to highlight the concurrent work of zhang & zhang (2021) who analyze the related concept of influence functions in the context of deep models. LINEBREAK recent works have established a connection between kernel regression and deep learning. surprisingly, it is shown that an infinitely-wide, fully-connected network behaves like a kernel both at initialization (neal, 1996; lee et al., 2018; de g. matthews et al., 2018) as well as during gradient flow training (jacot et al., 2018). follow-up works have extended this result to the non-asymptotic setting (arora et al., 2019a; lee et al., 2019), proving that wide enough networks trained with small learning rates are in the so-called kernel regime, i.e. essentially evolving like their corresponding tangent kernel. moreover, the analysis has also been adapted to various architectures (arora et al., 2019a; huang et al., 2020; du et al., 2019; hron et al., 2020; yang, 2020) as well as discrete gradient descent (lee et al., 2019). the direct connection to the field of kernel regression enables the usage of the closed-form expression for loo error in the context of deep learning, which to the best of our knowledge has not been explored yet. LINEBREAK ¹ https://github.com/gregorbachmann/leaveoneout LINEBREAK setting and background LINEBREAK in the following, we establish the notations required to describe the problem of interest and the setting we consider. we study the standard learning setting, where we are given a dataset of input-target pairs s = {(x1, y1), . . . , (xn, yn)} where each (xi, yi) i.i.d.∼ d for i = 1, . . . , n is distributed according to some probability measure d. we refer to xi ∈ x ⊂ rd as the input and to yi ∈ y ⊂ rc as the target. we will often use matrix notations to obtain more compact formulations, thus summarizing all inputs and targets as matrices x ∈ rn×d and y ∈ rn×c. we will mainly be concerned with the task of multiclass classification, where targets yi are encoded as one-hot vectors. we consider a function space f and an associated loss function lf : x × y → r that measures how well a given function f ∈ f performs at predicting the targets. more concretely, we define the regularized empirical error as LINEBREAK ˆlλ s : f → r, f ↦ (1/n) Σ_{i=1}^{n} lf (xi, yi) + λ ω(f ), LINEBREAK where λ > 0 is a regularization parameter and ω : f → r is a functional. for any vector v ∈ rc, let v∗ = argmaxk≤c vk. using this notation, we let af (x, y) := 1{f ∗(x)=y∗}. we then define the empirical accuracy ˆas as the average number of instances for which f ∈ f provides the correct prediction, i.e. LINEBREAK ˆas (f ) = (1/n) Σ_{i=1}^{n} af (xi, yi). LINEBREAK finally, we introduce a learning algorithm qf : (x × y)n → f, which, given a function class f and a training set s ∈ (x × y)n, chooses a function f ∈ f. we will exclusively be concerned with learning through empirical risk minimization, i.e. LINEBREAK qλ f (s) := ˆf λ s := argminf ∈f ˆlλ s (f ), LINEBREAK and use the shortcut ˆfs := ˆf λ=0 s .
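as a concrete reading of these definitions, the short numpy sketch below (not from the paper) evaluates the empirical mean-squared loss and the argmax accuracy of a fixed predictor on one-hot targets; the toy data and the linear predictor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 200, 5, 3

x = rng.normal(size=(n, d))
labels = rng.integers(0, c, size=n)
y = np.eye(c)[labels]                      # one-hot targets Y in R^{n x c}

w = rng.normal(size=(d, c)) * 0.1          # some fixed f in F (here: a linear map)
f_x = x @ w

emp_loss = np.mean(np.sum((f_x - y) ** 2, axis=1))         # hat L_S(f), lambda = 0
emp_acc = np.mean(f_x.argmax(axis=1) == y.argmax(axis=1))  # hat A_S(f)
print(emp_loss, emp_acc)
```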
in practice however, the most important measures are given by the generalization loss and accuracy of the model, defined as LINEBREAK l : f → r, f ↦ e(x,y)∼d [lf (x, y)] , and a : f → [0, 1], f ↦ p(x,y)∼d (f ∗(x) = y∗) . LINEBREAK a central goal in machine learning is to control the difference between the empirical error ˆls (f ) and the generalization error l(f ). in this work, we mainly focus on the mean squared loss, LINEBREAK lf (x, y) = Σ_{k=1}^{c} (fk(x) − yk)2, LINEBREAK and also treat classification as a regression task, as commonly done in the literature (lee et al., 2018; chen et al., 2020; arora et al., 2019a; hui & belkin, 2021; bachmann et al., 2021). finally, we consider the function spaces induced by kernels and neural networks, presented in the following. LINEBREAK kernel learning LINEBREAK a kernel k : x × x → r is a symmetric, positive semi-definite function, i.e. k(x, x′) = k(x′, x) ∀ x, x′ ∈ x and for any set {x1, . . . , xn} ⊂ x , the matrix k ∈ rn×n with entries kij = k(xi, xj) is positive semi-definite. it can be shown that a kernel induces a reproducing kernel hilbert space, a function space equipped with an inner product ⟨·, ·⟩h, containing elements LINEBREAK h = cl ( { f : fk(·) = Σ_{i=1}^{n} αik k(·, xi) for α ∈ rn×c, xi ∈ x } ), LINEBREAK where cl is the closure operator. although h is an infinite dimensional space, it turns out that the minimizer ˆf λ s can be obtained in closed form for the mean squared loss, given by LINEBREAK ˆf λ s (x) = k⊤x (k + λ1n)−1 y, which converges as λ → 0 to ˆfs (x) := k⊤x k† y, LINEBREAK where the regularization functional is induced by the inner product of the space, ω(f ) = ||f ||2 h, k ∈ rn×n has entries kij = k(xi, xj), and kx ∈ rn has entries (kx)i = k(x, xi). a† denotes the pseudo-inverse of a ∈ rn×n. we will analyze both the regularized ( ˆf λ s ) and unregularized model ( ˆfs ) and their associated loo errors in the following. for a more in-depth discussion of kernels, we refer the reader to hofmann et al. (2008); paulsen & raghupathi (2016).
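the closed-form minimizer above is straightforward to implement. the numpy sketch below does so for an rbf kernel, which is our illustrative choice rather than one prescribed by the paper, and includes the λ → 0 pseudo-inverse limit.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(x_train, y_onehot, lam, gamma=1.0):
    k = rbf_kernel(x_train, x_train, gamma)
    if lam > 0:
        coef = np.linalg.solve(k + lam * np.eye(len(k)), y_onehot)   # (K + lam I)^{-1} Y
    else:
        coef = np.linalg.pinv(k) @ y_onehot                          # K^dagger Y
    def predict(x_test):
        kx = rbf_kernel(x_test, x_train, gamma)                      # rows are k_x^T
        return kx @ coef                                             # hat f^lam_S(x)
    return predict

# toy usage: a 3-class problem with 2-d gaussian blobs
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
labels = rng.integers(0, 3, size=120)
x = centers[labels] + rng.normal(scale=0.5, size=(120, 2))
y = np.eye(3)[labels]

f = kernel_ridge_fit(x[:100], y[:100], lam=1e-3)
pred = f(x[100:]).argmax(axis=1)
print("test accuracy:", (pred == labels[100:]).mean())
```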
LINEBREAK neural networks and associated kernels LINEBREAK another function class that has seen a rise in popularity is given by the family of fully-connected neural networks of depth l ∈ n, LINEBREAK fn n =
| 3 | [124.997, 569.2490828, 157.28891028, 580.1268556] |
fvLLcIYmXb.pdf | 2,022 | 1 |
| " LINEBREAK as-mlp: an axial shifted mlp architecture for vision LINEBREAK dongze lian∗, zehao yu(...TRUNCATED)
| 5 | [108.299, 179.8876768, 200.0834953, 191.8428768] |
n-bvaLSCC78.pdf | 2,023 | 0 |
| " LINEBREAK ea-has-bench: energy-aware hyperparameter and architecture search benchmark LINEBREAK s(...TRUNCATED)
| 3 | [290.555, 67.0600828, 369.90591028, 77.9378556] |